Abstract: Coal mine accidents induced by high-energy microseisms are frequent, and classifying mine microseismic events is an important part of accident treatment and post-disaster recovery of production. Widely deployed microseismic monitoring systems generate large volumes of microseismic time series data. Such data usually contain a large amount of random environmental noise, and different types of microseismic events affect mining work very differently, so effectively classifying microseismic time series data is a key and difficult problem. For data with these characteristics, existing classification methods still suffer from low denoising efficiency, low classification accuracy, and poor stability. To address these issues, this paper proposes an integrated classification model, the wavelet dynamic particle swarm optimization random technology extreme learning machine (WA-DPSO-RTELM). First, to overcome the discontinuity and bias of conventional wavelet threshold functions, an improved wavelet threshold denoising method is proposed that effectively denoises the data, together with a PSO algorithm with a dynamic adjustment factor that makes the denoising adaptive. Second, a weighted ensemble classification method is proposed. Because the randomness of ELM parameters and the uncertainty in the number of ELM hidden nodes lead to poor classification performance, a weight construction method for ELMs is introduced, and improved ELM-based classifiers are used to compensate for the differences between classifiers and make the classification results more stable. Finally, the effectiveness of the denoising and ensemble classification methods is verified by experimental tests.
In denoising, the proposed method is compared with the EMD, Kalman filtering, and DF-CNN methods: the signal-to-noise ratio (SNR) and mean square error (MSE) improve by about 1.04 and 0.16 on average. In classification, compared with other advanced methods, accuracy and recall improve by about 1.36 and 1.15 on average. Effective classification of microseismic time series data is becoming increasingly important. This paper combines the advantages of wavelet denoising with an improved threshold function that adapts the wavelet coefficients to effectively remove noise, and uses an ELM-based weighted ensemble method to effectively classify the time series data. Experimental results show that the proposed WA-DPSO-RTELM model achieves better classification performance than state-of-the-art methods on both a microseismic time series dataset and the UCR time series datasets. In future work, we will process microseismic data with a distributed processing framework, carry out experimental verification in a distributed environment, and continue to explore more types of microseismic events. PubDate: 2022-04-29
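The abstract does not reproduce the improved threshold function itself; a minimal sketch of the kind of continuous threshold it describes, assuming a common exponential form whose adjustment factor `a` is exactly the sort of parameter a PSO-style search could tune (the paper's actual function may differ), is:

```python
import math

def soft_threshold(w, lam):
    """Classical soft threshold: shrinks every retained coefficient by lam."""
    return math.copysign(max(abs(w) - lam, 0.0), w)

def improved_threshold(w, lam, a):
    """A continuous threshold that interpolates between soft (a -> 0) and
    hard (a -> inf) thresholding; unlike the hard threshold it has no jump
    at |w| = lam. 'a' is a hypothetical adjustment factor."""
    if abs(w) < lam:
        return 0.0
    return math.copysign(abs(w) - lam * math.exp(-a * (abs(w) - lam)), w)
```

At `|w| = lam` the improved threshold evaluates to 0, so the function is continuous, which is the defect of the hard threshold that such improved forms are designed to fix.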
Abstract: Reconstructing visual stimuli from brain activity measured by functional magnetic resonance imaging (fMRI) is a challenge for fMRI-based decoding. Some previous studies reconstructed 2D visual stimuli by using voxel-wise encoding and decoding models. Because most of these studies used nonlinear feature mappings, complicated nonlinear decoding methods, such as Bayesian models or deep learning, were needed to reconstruct 2D images, which increased computational complexity and time cost. Although our previous study proposed a contrast-disparity local decoding model to reconstruct 3D images from brain activity, the time cost of local decoding models grows with the size of the images. In this study, we propose a novel fast compound reconstruction model that combines a linear encoding–decoding model and a disparity decoding model to reconstruct 3D visual images from fMRI responses. The results demonstrate that the linear encoding–decoding model successfully reconstructed the 2D contrasts of 3D images from the early visual regions, although it failed to reconstruct 3D images directly. The proposed compound reconstruction model successfully reconstructed 3D images by combining the 2D contrasts reconstructed from the early visual region (V1) with the disparities decoded from the dorsal visual regions (V3A and V7). Compared with the contrast-disparity local decoding model, the compound reconstruction model showed significantly better reconstruction performance and much faster training. The success of the compound reconstruction model suggests that contrasts and disparity may first be processed separately in two different visual pathways (early and dorsal), with the two pathways finally working together to represent 3D images. PubDate: 2022-04-28
Abstract: Recently, deep learning techniques have been applied to visual and light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) problems. Supervised deep learning SLAM methods need ground truth data for training, but collecting such data is costly and labour-intensive. Unsupervised training strategies have been adopted by some visual or LiDAR SLAM methods; however, these methods only exploit single-sensor modalities and ignore the complementary advantages of LiDAR and visual data. In this paper, we propose a novel unsupervised multi-channel visual-LiDAR SLAM method (MVL-SLAM) that fuses visual and LiDAR data. Our SLAM system consists of an unsupervised multi-channel visual-LiDAR odometry (MVLO) component, a deep learning–based loop closure detection component, and a 3D mapping component. The odometry component adopts a multi-channel recurrent convolutional neural network (RCNN) whose input consists of front, left, and right view depth images generated from \(360^{\circ }\) 3D LiDAR data, together with RGB images. The loop closure detection component uses features from a deep convolutional neural network (CNN). Our SLAM method does not require ground truth data for training and can directly construct environmental 3D maps from the 3D mapping component. Experiments on the KITTI odometry dataset show that the rotation and translation errors are lower than those of other unsupervised methods, including UnMono, SfmLearner, DeepSLAM, and UnDeepVO. By fusing visual and LiDAR data, MVL-SLAM achieves more accurate and robust pose estimation than single-modal SLAM systems. PubDate: 2022-04-28
Abstract: Intelligent systems have been developed for years to solve specific tasks automatically. An important issue emerges when the information used by these systems is dynamic in nature and evolves: this adds a level of complexity that makes these systems prone to a noticeable worsening of their performance, so their capabilities have to be upgraded to meet the new requirements. The problem is even more challenging when the information comes from human individuals and their interactions through language. This happens easily and forcefully in Sentiment Analysis, where human feelings and opinions are in constant evolution. In this context, systems are trained with an enormous corpus of textual content, or they include an extensive set of words and their associated sentiment values. These solutions are usually static and generic, making their manual upgrading almost unworkable. In this paper, an automatic and interactive coaching architecture is proposed. It includes an ML framework and a dictionary-based system, both trained for a specific domain. These systems converse about the outcomes obtained during their respective learning stages, simulating human interactive coaching sessions. This leads to an Active Learning process in which the dictionary-based system acquires new information and improves its performance. More than 800,000 tweets were gathered and processed for the experiments, and outstanding results were obtained with the proposed architecture. The lexicon was also updated with both prior and newly learned words related to the corpus, which is important for better sentiment classification. PubDate: 2022-04-27
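The coaching idea can be illustrated with a toy sketch, under the assumption (not stated in the abstract) that the ML model acts as the "coach" whose labels nudge unknown or disagreeing lexicon entries; the names `dict_sentiment` and `coach_update` are hypothetical:

```python
def dict_sentiment(text, lexicon):
    """Dictionary-based score: mean polarity of known words, 0.0 if none known."""
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

def coach_update(texts, ml_labels, lexicon, lr=0.5):
    """When the dictionary disagrees with the ML coach (label in {-1, +1}),
    nudge every word of the text toward the coach's polarity."""
    for text, label in zip(texts, ml_labels):
        pred = 1 if dict_sentiment(text, lexicon) >= 0 else -1
        if pred != label:
            for w in text.lower().split():
                lexicon[w] = lexicon.get(w, 0.0) + lr * label
    return lexicon
```

Repeating such rounds over a labeled stream is one way a static lexicon can acquire new, domain-specific words, which is the gap the coaching architecture targets.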
Abstract: Human decision-making is relevant for concept formation and cognitive illusions. Cognitive illusions can be explained by quantum probability, but the usual reason for introducing quantum mechanics rests on ad hoc bounded rationality (BR). Concept formation can be explained in a set-theoretic way, although such explanations have not been extended to cognitive illusions. We naturally expand the idea of BR to incomplete BR and introduce the key notion of nonlocality in cognition without appealing to quantum theory. We define incomplete bounded rationality and nonlocality as a binary relation, construct a lattice from the relation using a rough-set technique, and define probability in concept formation. Using this probability, we describe various cognitive illusions, such as the guppy effect, the conjunction fallacy, and the order effect. This implies that cognitive illusions can be explained by changes in the probability space underlying concept formation. PubDate: 2022-04-20
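The rough-set machinery the abstract mentions rests on lower and upper approximations of a concept with respect to an indiscernibility relation; a minimal sketch (assuming, as is standard, that the relation is given by its equivalence classes) is:

```python
def approximations(partition, target):
    """Rough-set lower/upper approximation of `target` w.r.t. a partition
    (the equivalence classes of an indiscernibility relation).
    Lower: blocks entirely inside the concept; upper: blocks touching it."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper
```

The gap between the two approximations is the boundary region; changing the underlying relation changes both approximations, which is one concrete sense in which "the probability space relevant to concept formation" can shift.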
Abstract: Most existing approaches for cross-subject electroencephalogram (EEG) emotion recognition learn universal features across subjects guided by neurological findings. The performance of these methods may be sub-optimal because the relationships between the brain and emotion are not yet sufficiently understood. Hence, where neurological findings are insufficient, it is essential to develop a domain adaptation method for EEG data. In this paper, we propose a generator-based domain adaptation method with a knowledge-free (GDAKF) mechanism for cross-subject EEG emotion recognition. Specifically, the feature distribution of the source domain is transformed into the feature distribution of the target domain via adversarial learning between a generator and a discriminator. Additionally, the transformation is constrained by an EEG content regression loss and an emotion information loss to preserve emotional information during feature alignment. To evaluate the effectiveness and performance of GDAKF, extensive experiments are carried out on the benchmark DEAP dataset. The results show that GDAKF achieves excellent performance, with a mean accuracy of 63.85% on low/high valence, comparable to cross-subject EEG emotion recognition methods in the literature. This paper provides a novel approach to cross-subject EEG emotion recognition that can also be applied to cross-session and cross-device emotion recognition tasks. PubDate: 2022-04-19
Abstract: Providing a good study plan is key to avoiding students' failure. Academic advising based on students' preferences, the complexity of the semester, or background knowledge is usually employed to reduce the dropout rate. This article aims to provide a course index for recommending courses to students based on the sequences of courses already taken by previous students. Hence, unlike existing long-term course planning methods, it models courses from graduates' records rather than from external factors that might bias the process. The proposal includes a novel sequential pattern mining algorithm, called (ES) \(^2\) P (Evolutionary Search of Emerging Sequential Patterns), that identifies paths followed by good students but not by weaker ones, yielding a long-term course planning approach. A major feature of the proposed (ES) \(^2\) P algorithm is its ability to extract the best k solutions, that is, those with the best recommendation index scores, instead of returning the whole set of solutions above a predefined threshold. A real case study including more than 13,000 students from 13 faculties demonstrates the usefulness of the proposal, not only for recommending study plans but also for giving advice at different stages of the students' learning process. PubDate: 2022-04-19
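The core of emerging sequential pattern mining can be sketched in a few lines: count how often a course path occurs as a subsequence in the "good" group versus the rest, and score it by a growth-rate-style ratio. This is a generic sketch; the paper's actual recommendation index and evolutionary search are not specified in the abstract:

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` in order (gaps allowed)."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(pattern, sequences):
    """Fraction of sequences containing the pattern."""
    return sum(is_subsequence(pattern, s) for s in sequences) / len(sequences)

def emerging_score(pattern, good_seqs, other_seqs, eps=1e-9):
    """High when the course path is frequent among good students and rare
    among the rest (a plausible, hypothetical index)."""
    return support(pattern, good_seqs) / (support(pattern, other_seqs) + eps)
```

A top-k variant, as in (ES)\(^2\)P, would simply rank candidate patterns by this score and keep the k best rather than thresholding.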
Abstract: In this work, we consider multitasking in the context of solving multiple optimization problems simultaneously through a single search process. The principal goal in this scenario is to dynamically exploit the complementarities among the problems (tasks) being optimized, helping each other through the exchange of valuable knowledge. The emerging paradigm of evolutionary multitasking tackles multitask optimization scenarios using biologically inspired concepts drawn from swarm intelligence and evolutionary computation. The main purpose of this survey is to collect, organize, and critically examine the abundant literature published so far in evolutionary multitasking, with an emphasis on the methodological patterns followed when designing new algorithmic proposals in this area (namely, multifactorial optimization and multipopulation-based multitasking). We complement our critical analysis with an identification of challenges that remain open to date, along with promising research directions that can leverage the potential of biologically inspired algorithms for multitask optimization. Our discussions throughout this manuscript are offered to the audience as a reference on the general trajectory followed by the community working in this field in recent times, as well as a self-contained entry point for newcomers and researchers interested in joining this exciting research avenue. PubDate: 2022-04-12
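Multifactorial optimization, one of the two methodological patterns the survey highlights, keeps a single population for all tasks and assigns each individual a skill factor (the task on which it ranks best) and a scalar fitness. A minimal sketch of that bookkeeping, under the usual definitions:

```python
def factorial_ranks(population, tasks):
    """tasks: list of objective functions (minimisation).
    Returns per-task ranks for each individual (1 = best on that task)."""
    ranks = []
    for f in tasks:
        order = sorted(range(len(population)), key=lambda i: f(population[i]))
        r = [0] * len(population)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        ranks.append(r)
    return ranks

def skill_factors(population, tasks):
    """Each individual's skill factor is the task index on which it ranks
    best; its scalar fitness is 1 / (best rank)."""
    ranks = factorial_ranks(population, tasks)
    out = []
    for i in range(len(population)):
        best_task = min(range(len(tasks)), key=lambda t: ranks[t][i])
        out.append((best_task, 1.0 / ranks[best_task][i]))
    return out
```

In a full algorithm, crossover between individuals with different skill factors is the channel through which knowledge is exchanged across tasks.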
Abstract: Background: Electroencephalogram technology provides a reference for the study of schizophrenia, and constructing brain functional networks from electroencephalogram data is one of the important methods for analyzing the human brain. Current methods for constructing brain functional networks often ignore the deeper interactions between brain regions and the fact that the brain's connectivity patterns change over time. Methods: Therefore, for the aided diagnosis of schizophrenia, a hybrid high-order brain functional network model is proposed. The model, which characterizes more complex functional interactions of the brain, comprises static low-order multilayer brain functional networks and dynamic high-order multilayer brain functional networks. Results: The results show that the classification method based on the proposed model is effective and efficient, with an accuracy of 94.05%, a sensitivity of 95.56%, and a specificity of 92.31%. Conclusions: Experimental results on the schizophrenia dataset show that the proposed method performs satisfactorily; the complementarity between low-order and high-order multilayer brain functional networks better captures brain functional interactions. These findings, which highlight the importance of richer relationships between brain regions and of the temporal features of brain connectivity, carry new biologically inspired implications. PubDate: 2022-04-11
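The low-order/high-order distinction can be made concrete with a small sketch: a static low-order network correlates whole regional time series, while a dynamic, higher-order view correlates sliding-window correlation profiles. This is a generic construction, not necessarily the paper's exact pipeline:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def low_order_network(signals):
    """Static low-order network: pairwise correlation of full time series."""
    n = len(signals)
    return [[pearson(signals[i], signals[j]) for j in range(n)] for i in range(n)]

def dynamic_profile(x, y, win, step):
    """Time series of windowed correlations for one region pair; high-order
    edges would correlate these profiles across region pairs."""
    return [pearson(x[s:s + win], y[s:s + win])
            for s in range(0, len(x) - win + 1, step)]
```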
Abstract: Text classification is a fundamental and important task in natural language processing. Many graph-based neural networks have been proposed for this task, with the capacity to learn complicated relational information between word nodes. However, existing approaches are potentially insufficient in capturing semantic relationships between words. In this paper, to address this issue, we propose a novel graph-based model in which every document is represented as a text graph. Specifically, we devise an attention gated graph neural network (AGGNN) to propagate and update the semantic information of each word node from its 1-hop neighbors. Keyword nodes with discriminative semantic information are extracted via our proposed attention-based text pooling layer (TextPool), which also aggregates the document embedding. In this way, text classification is transformed into a graph classification task. Extensive experiments on four benchmark datasets demonstrate that the proposed model outperforms previous text classification approaches. PubDate: 2022-04-07
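Attention-based pooling of node embeddings into one document vector can be sketched in a few lines. This is a generic formulation (score each node against a learnable attention vector, softmax, weighted sum), not TextPool's exact architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # shift by max for stability
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(node_embeddings, attn_vector):
    """Score each word node by a dot product with an attention vector,
    then return the attention-weighted sum as the document embedding,
    plus the weights (nodes with high weight act as 'keyword' nodes)."""
    scores = [sum(a * b for a, b in zip(h, attn_vector)) for h in node_embeddings]
    weights = softmax(scores)
    dim = len(node_embeddings[0])
    doc = [sum(w * h[d] for w, h in zip(weights, node_embeddings)) for d in range(dim)]
    return doc, weights
```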
Abstract: Many deaths are caused by heart disease. A phonocardiogram (PCG) reflects the overall pattern of heart movement, so the analysis of heart sound signals is particularly important. In this paper, we propose a new deep neural network termed DsaNet, mainly constructed from depthwise separable convolutions and an attention module. DsaNet can directly classify PCG signals without complicated feature engineering. To address the long-tail distribution of the PCG dataset, we adopt a novel imbalanced learning approach (two-stage training) to train DsaNet. Specifically, we propose a random cropping operation to increase the amount and diversity of the data in the training stage, and we combine random cropping with an ensembling idea to improve accuracy in the testing stage. Moreover, we study the effectiveness of several attention modules and data balancing methods for improving the performance of DsaNet. To verify its performance, we compare DsaNet with 7 baseline models on the public 2016 PhysioNet/CinC Challenge dataset. The experimental results show that DsaNet achieves competitive performance for imbalanced PCG signal classification with relatively few parameters and computations, and that two-stage training significantly improves its generalization performance. PubDate: 2022-03-28
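The random-cropping augmentation and its test-time ensemble counterpart are model-agnostic and easy to sketch; `predict` below stands in for any trained classifier mapping a crop to a score (hypothetical interface):

```python
import random

def random_crop(signal, crop_len, rng=random):
    """Training-time augmentation: a random window of the 1-D PCG signal."""
    start = rng.randrange(len(signal) - crop_len + 1)
    return signal[start:start + crop_len]

def ensemble_predict(signal, crop_len, predict, n_crops=5, rng=random):
    """Test-time ensemble: average the model's scores over several random
    crops of the same recording."""
    scores = [predict(random_crop(signal, crop_len, rng)) for _ in range(n_crops)]
    return sum(scores) / len(scores)
```

Averaging over crops reduces the variance introduced by where the crop happens to land, which is why it tends to improve test accuracy.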
Abstract: Picture fuzzy numbers (PFNs), with three membership degrees, can accurately describe the uncertainty of cognitive information. However, picture fuzzy multi-criteria decision-making (MCDM) methods need further study. This paper describes an extended picture fuzzy multi-objective optimization by ratio analysis plus full multiplicative form (MULTIMOORA) method based on prospect theory (PT) for handling MCDM. In this process, decision-makers (DMs) provide fuzzy linguistic terms to evaluate the relevant criteria, and the evaluation information is transformed into PFNs through transformation scales. The corresponding criterion weights are then calculated from picture fuzzy entropy. Moreover, PT, an important tool for describing the psychological cognition of DMs, is used to obtain a prospect decision matrix. The MULTIMOORA method, which simultaneously applies the picture fuzzy ratio system, the picture fuzzy reference point, and the picture fuzzy full multiplicative form, is utilized to determine the final rankings of the candidate alternatives. We hence propose an extended picture fuzzy MULTIMOORA method based on PT, the MULTIMOORA method, and picture fuzzy Dice distance measures, which can be applied to MCDM problems where weight information is completely unknown. The feasibility and validity of the proposed method were verified by applying it to medical institution selection, and sensitivity and comparative analyses demonstrated its superiority over existing methods. PubDate: 2022-03-18
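A picture fuzzy number is a triple of positive, neutral, and negative membership degrees summing to at most 1, the remainder being the refusal degree. A minimal sketch, using the widely used score function s = mu − nu (the paper's own ranking rule may differ):

```python
def pfn(mu, eta, nu):
    """Picture fuzzy number: (positive, neutral, negative) membership
    degrees with mu + eta + nu <= 1."""
    assert 0 <= mu and 0 <= eta and 0 <= nu and mu + eta + nu <= 1 + 1e-12
    return (mu, eta, nu)

def refusal(p):
    """Refusal degree: whatever membership mass is left unassigned."""
    mu, eta, nu = p
    return 1.0 - mu - eta - nu

def score(p):
    """A common score function for ranking PFNs: s = mu - nu."""
    mu, _, nu = p
    return mu - nu
```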
Abstract: With the increasing popularity of short videos on various social media platforms, evaluating the aesthetic quality of these videos is a great challenge. In this paper, we first construct a large-scale, properly annotated short video aesthetics (SVA) dataset of 6900 video shots. We then propose a cognitive multi-type feature fusion network (MVVA-Net) for video aesthetic quality assessment. MVVA-Net consists of two branches that take different types of video frames as input: the inter-frame aesthetics branch extracts inter-frame aesthetic features from sequential frames sampled at fixed intervals, and the intra-frame aesthetics branch extracts intra-frame aesthetic features from key frames selected by the inter-frame difference method. Through the adaptive fusion of the two kinds of features, video aesthetic quality can be effectively evaluated. At the same time, MVVA-Net does not require a fixed number of input frames, which greatly enhances its generalization ability. We performed quantitative comparisons and ablation studies. The experimental results show that the two branches effectively extract the intra-frame and inter-frame aesthetic features of different videos, and that through their adaptive fusion MVVA-Net achieves better classification performance and stronger generalization than other methods across different datasets. PubDate: 2022-03-12
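The inter-frame difference method for key-frame selection is standard and easy to sketch: keep a frame when it differs enough from the last kept frame. Frames are modeled here as flat lists of pixel intensities; the threshold is a hypothetical parameter:

```python
def frame_difference(f1, f2):
    """Mean absolute pixel difference between two frames (flat lists)."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def key_frames(frames, threshold):
    """Inter-frame difference method (sketch): keep the first frame, then
    every frame whose difference from the last kept frame exceeds
    `threshold`. Returns the kept frame indices."""
    kept = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[kept[-1]], frames[i]) > threshold:
            kept.append(i)
    return kept
```

The other branch's input, sequential frames at a fixed interval k, is simply `frames[::k]`.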
Abstract: Change detection is a significantly important task in the field of remote sensing and is widely used in urban construction planning, disaster survey, resource management, etc. Previous studies have shown that most state-of-the-art change detection methods are based on deep learning networks. However, the problem of change detection still cannot be effectively solved because of variations in illumination, resolution, quality, scale, and location, so robust methods for change detection need further investigation. This study aims to address the insufficient robustness of change detection between multi-temporal images. The biological mechanism of parallel processing in the visual pathways inspired us to design a sensory framework with multiple sensory pathways. We propose a new framework named the multi-sensory pathway network (MSPN), inspired by the parallel processing of human visual information. Specifically, the framework comprises three diverse but related sensory pathways that are not simply parallel but share connections: sensory pathway-1 adopts an early fusion strategy to learn the change information, sensory pathway-2 uses a middle concatenation strategy, and sensory pathway-3 utilizes a middle difference strategy. Two fusion strategies, average fusion and maximum fusion, are designed for the framework. The experimental datasets consist of BCDD, LEVIR-CD, and CDD. Four metrics, overall accuracy (OA), precision, recall, and F1, are used to evaluate the competing algorithms, with F1 as the primary metric.
The proposed method achieves the best F1 scores of 84.55%, 88.14%, and 85.11% on the three experimental datasets, respectively. The quantitative ablation results show the effectiveness of the multi-sensory pathways on BCDD, LEVIR-CD, and CDD, and the qualitative ablation results demonstrate that the different sensory pathways perform different perception mechanisms even though they belong to a unified framework. The comprehensive results of MSPN-AF on BCDD and of MSPN-MF on LEVIR-CD and CDD are superior to those of other methods. The experimental results demonstrate the effectiveness and robustness of the proposed method, both qualitatively and quantitatively, and the proposed MSPN can promote the exploration of bionic and explainable neural networks. PubDate: 2022-03-03
Abstract: Vector symbolic architectures (VSA) are a viable approach for the hyperdimensional representation of symbolic data such as documents, syntactic structures, or semantic frames. We present a rigorous mathematical framework for the representation of phrase structure trees and parse trees of context-free grammars (CFG) in Fock space, i.e. the infinite-dimensional Hilbert space used in quantum field theory. We define a novel normal form for CFG by means of term algebras. Using a recently developed software toolbox, called FockBox, we construct Fock space representations of the trees built up by a CFG left-corner (LC) parser. We prove a universal representation theorem for CFG term algebras in Fock space and illustrate our findings through a low-dimensional principal component projection of the LC parser state. Our approach could leverage the development of VSA for explainable artificial intelligence (XAI) by means of hyperdimensional deep neural computation. PubDate: 2022-03-01
Abstract: Keyphrases capture the main content of a free-text document. Automatic keyphrase extraction (AKPE) plays a significant role in retrieving and summarizing valuable information from documents across different domains. Various techniques have been proposed for this task; however, supervised AKPE requires large annotated corpora and depends on the target domain. An alternative is a domain-independent method applicable to several domains (such as medical or social). In this paper, we tackle keyphrase extraction from single documents with HAKE, a novel unsupervised method that mines linguistic, statistical, structural, and semantic text features simultaneously to select the most relevant keyphrases in a text. HAKE achieves higher F-scores than unsupervised state-of-the-art systems on standard datasets and is suitable for real-time processing of large amounts of Web and text data across different domains. With HAKE, we also explicitly increase the coverage and diversity of the selected keyphrases through a novel technique for candidate keyphrase identification and extraction based on a parse-tree approach, part-of-speech tagging, and filtering. This technique generates a comprehensive and meaningful list of candidate keyphrases while reducing the candidate set's size without increasing computational complexity. HAKE's effectiveness is compared with twelve state-of-the-art and recent unsupervised approaches, as well as several supervised approaches. Experimental analysis on five of the top available benchmark corpora from different domains shows that HAKE significantly outperforms both existing unsupervised and supervised methods. Our method does not require training on a particular set of documents, nor does it depend on external corpora, dictionaries, domain, or text size.
Our experiments confirm that both HAKE's candidate selection model and its ranking model are effective. PubDate: 2022-03-01
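HAKE itself uses POS tags and parse trees for candidate identification; as a dependency-free stand-in, the same idea can be sketched RAKE-style by splitting the text at stopwords and punctuation and keeping short content-word runs (the stopword list here is a tiny illustrative one):

```python
import re

# Tiny illustrative stoplist; a real system would use a full one.
STOPWORDS = {"the", "of", "a", "an", "and", "or", "to", "in", "for",
             "is", "are", "on", "with"}

def candidate_phrases(text, max_len=3):
    """Split the text at stopwords/punctuation and keep contiguous
    content-word runs of up to `max_len` words as candidate keyphrases."""
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(" ".join(current))
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(" ".join(current))
    return [p for p in phrases if len(p.split()) <= max_len]
```

Capping phrase length is one cheap way to shrink the candidate set without extra passes over the text, which is the efficiency property the abstract emphasizes.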
Abstract: Working together on complex collaborative tasks requires agents to coordinate their actions. Doing this explicitly or completely prior to the actual interaction is not always possible, nor is it sufficient. Agents also need to continuously understand the current actions of others and quickly adapt their own behavior appropriately. Here we investigate how efficient, automatic coordination processes at the level of mental states (intentions, goals), which we call belief resonance, can lead to collaborative situated problem-solving. We present a model of hierarchical active inference for collaborative agents (HAICA). It combines efficient Bayesian Theory of Mind processes with a perception–action system based on predictive processing and active inference. Belief resonance is realized by letting the inferred mental states of one agent influence another agent's predictive beliefs about its own goals and intentions. In this way, the inferred mental states influence the agent's own task behavior without explicit collaborative reasoning. We implement and evaluate this model in the Overcooked domain, in which two agents with varying degrees of belief resonance team up to fulfill meal orders. Our results demonstrate that agents based on HAICA achieve team performance comparable to recent state-of-the-art approaches while incurring much lower computational costs. We also show that belief resonance is especially beneficial in settings where the agents have asymmetric knowledge about the environment. The results indicate that belief resonance and active inference allow for quick and efficient agent coordination and can thus serve as a building block for collaborative cognitive agents. PubDate: 2022-03-01
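The coupling at the heart of belief resonance can be sketched as a simple convex mixture of the agent's own goal prior with the goal distribution it infers for its partner. This is a sketch of the coupling idea only, with a hypothetical resonance strength `gamma`, not HAICA's exact update:

```python
def normalize(p):
    """Rescale a distribution (dict goal -> weight) to sum to 1."""
    s = sum(p.values())
    return {k: v / s for k, v in p.items()}

def belief_resonance(own_prior, inferred_partner, gamma):
    """Mix the agent's own goal prior with the partner's inferred goal
    distribution; gamma in [0, 1] is the resonance strength (gamma = 0
    means no resonance at all)."""
    return normalize({g: (1 - gamma) * own_prior[g] + gamma * inferred_partner[g]
                      for g in own_prior})
```

Because the partner's inferred goals enter the agent's own prior directly, coordination emerges without any explicit joint planning step, which is why the approach is computationally cheap.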
Abstract: Soft clustering can be regarded as a cognitive computing method that deals with clustering with fuzzy boundaries. As a classical soft clustering algorithm, rough k-means (RKM) has yielded various extensions, but some challenges remain. On the one hand, the user-defined cutoff threshold is subjective and cannot change during iteration. On the other hand, the weight of an object with respect to a cluster center is calculated from the membership grade and a subjective parameter, the fuzzifier, which complicates the issue and reduces the robustness of the algorithm. In this paper, inspired by human cognition of distance stability, an adaptive three-way c-means algorithm is proposed. First, in human cognition, objects are clustered according to the stability of their distances to the clusters, and variance is an effective way to measure the stability of data. Based on this, an adaptive cutoff threshold is introduced by locating the maximum increment between the variances of the distances. Second, based on the cognition that distance is inversely proportional to weight, the weight equation is defined by distance alone, without introducing any subjective parameters. Combining the adaptive cutoff threshold and the weight equation, the algorithm A-3WCM is obtained. The experimental results show that A-3WCM exhibits excellent performance and outperforms five representative RKM-related algorithms on nine popular datasets. PubDate: 2022-03-01
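Both ideas are concrete enough to sketch: cut the sorted object-to-center distances where the variance of the growing prefix jumps the most, and weight objects inversely to distance with no fuzzifier. This is a plausible reading of the abstract's description, not the paper's verified formulas:

```python
def adaptive_cutoff(distances):
    """Sort the object-to-center distances and cut where the variance of
    the growing prefix increases the most, i.e. where the distances stop
    being 'stable'. Requires at least 3 distances."""
    d = sorted(distances)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    variances = [var(d[:k]) for k in range(2, len(d) + 1)]
    increments = [b - a for a, b in zip(variances, variances[1:])]
    k = increments.index(max(increments)) + 2  # prefix length before the jump
    return d[k - 1]

def distance_weights(distances, eps=1e-12):
    """Parameter-free weights, inversely proportional to distance and
    normalized to sum to 1 (no fuzzifier needed)."""
    inv = [1.0 / (dist + eps) for dist in distances]
    s = sum(inv)
    return [w / s for w in inv]
```

Objects at or below the cutoff would go to a cluster's lower (core) region and the rest to its boundary, giving the three-way assignment without a user-chosen threshold.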