Journal of Information & Knowledge Management
Journal Prestige (SJR): 0.19 | Citation Impact (CiteScore): 1 | Number of Followers: 313 | Hybrid journal (may contain Open Access articles) | ISSN (Print): 0219-6492 | ISSN (Online): 1793-6926 | Published by World Scientific [121 journals]
- On Combining Instance Selection and Discretisation: A Comparative Study of Two Combination Orders
Authors: Kuen-Liang Sue, Chih-Fong Tsai, Tzu-Ming Yan
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Data discretisation focuses on converting continuous attribute values to discrete ones, which are closer to a knowledge-level representation that is easier to understand, use, and explain than continuous values. On the other hand, instance selection aims at filtering out noisy or unrepresentative data samples from a given training dataset before constructing a learning model. In practice, some domain datasets may require processing with both discretisation and instance selection at the same time. In such cases, the order in which discretisation and instance selection are combined will result in differences in the processed datasets. For example, discretisation can be performed first on the original dataset, after which the instance selection algorithm is used to evaluate the discretised data for selection; the alternative is to perform instance selection first on the continuous data and then use the discretiser to transform the attribute values of the reduced dataset. However, this issue has not been investigated before. The aim of this paper is to compare the performance of a classifier trained and tested on datasets processed by these two combination orders. Specifically, the minimum description length principle (MDLP) and ChiMerge are used for discretisation, and IB3, DROP3 and GA for instance selection. The experimental results obtained using ten different domain datasets show that executing instance selection first and discretisation second performs best, which can serve as a guideline for datasets that require both steps. In particular, combining DROP3 and MDLP can provide classification accuracy of 0.85 and AUC of 0.8, which can be regarded as a representative baseline for future related research.
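The two combination orders compared in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: an equal-width discretiser stands in for MDLP/ChiMerge, a naive "keep instances whose nearest neighbour shares their label" filter stands in for IB3/DROP3/GA, and all function names are ours.

```python
def discretise(values, bins=3):
    """Equal-width discretisation of a list of floats into bin indices."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against constant attributes
    return [min(int((v - lo) / width), bins - 1) for v in values]

def select_instances(values, labels):
    """Keep instances whose nearest neighbour (1-D distance) shares their label."""
    kept = []
    for i, (x, y) in enumerate(zip(values, labels)):
        dists = [(abs(x - x2), y2)
                 for j, (x2, y2) in enumerate(zip(values, labels)) if j != i]
        if min(dists)[1] == y:
            kept.append(i)
    return kept

def discretise_then_select(values, labels):
    """Order 1: discretise the original data, then select instances."""
    d = discretise(values)
    idx = select_instances(d, labels)
    return [d[i] for i in idx], [labels[i] for i in idx]

def select_then_discretise(values, labels):
    """Order 2 (the better-performing order in the paper): select on the
    continuous data, then discretise the reduced dataset."""
    idx = select_instances(values, labels)
    reduced = [values[i] for i in idx]
    return discretise(reduced), [labels[i] for i in idx]
```

Note that the two orders generally yield different processed datasets: discretising first changes the distances the selector sees, while selecting first changes the value range the discretiser bins.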
Citation: Journal of Information & Knowledge Management
PubDate: 2024-08-17T07:00:00Z
DOI: 10.1142/S0219649224500813
- Detection and Classification of Network Traffic in Bot Network Using Deep Learning
Authors: K. Srinarayani, B. Padmavathi, Kavitha Datchanamoorthy, T. Saraswathi, S. Maheswari, R. Fatima Vincy
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
One of the most dangerous threats to computer networks is the use of botnets, which can seriously harm systems and steal private data. Botnets are remote-controlled networks of compromised computers that an individual or group uses for malicious purposes; the infected computers are frequently called “bots” or “zombies”. A wide variety of malicious activities, including the distribution of malware and credential theft, can be carried out using botnets. The CTU-13 dataset is a collection of network traffic information that includes examples of various botnet types. Using this dataset, our study compares the abilities of decision trees, random forests, 1D convolutional neural networks, and a proposed system based on long short-term memory and residual neural networks to detect botnets. According to our findings, the proposed system performs better than every other algorithm, achieving a higher accuracy rate. It can precisely identify botnet traffic patterns, which can assist organisations in proactively preventing botnet attacks.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-08-09T07:00:00Z
DOI: 10.1142/S0219649224500862
- Analysing the Factors Influencing the Inclusive Development of Fisher Folk Concerning Southern Districts of Tamil Nadu
Authors: X. Agnes Pravina, R. Radhika
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The state’s economy depends significantly on the work of fishermen. The purpose of fisher folk development is to create a fishing community that is fully developed in all areas, including education, health, social standing, and economic development. The fisher community’s many development programmes are designed to aid the men, women, youth, and children who work in fishing activities and reside in coastal areas by enhancing their access to education, healthcare, culture, and employment opportunities. The paper examines the factors influencing the inclusive development of fisher folk in the southern districts of Tamil Nadu. Fundamental data were gathered through field surveys, and an interview schedule was created to compile a thorough profile of the socioeconomic circumstances of fishing households. Using convenience sampling, 200 respondents from Tamil Nadu’s southern districts were included in the final sample. The results revealed that climate change and inadequate technology significantly impact the fisher communities’ inclusive development, while inadequate facilities for storing the catch show no significant relationship with it. Factors influencing inclusive development include socio-demographic characteristics, climate change, lack of occupational return, inadequate storage facilities, and lack of knowledge, technology, and financial institutions. Age, marital status, education, and involvement do not significantly impact development, and the findings also show that development of the fishing community is unaffected by a lack of financial institutions, expertise, or awareness. The results confirmed that state assistance was inefficient in reaching the targeted society and emphasised the need for further planned government intervention. By creating additional capacity-building initiatives that provide continuous social protection and engage the coastal community through innovative awareness campaigns, the authorities can demonstrate their commitment to the full development of fishing communities.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-08-07T07:00:00Z
DOI: 10.1142/S0219649224500643
- Generic Semantic Trajectory Data Modelling Approach Based on Ontologies
Authors: Wided Oueslati, Oumaima Sami, Afef Bahri, Jalel Akaichi
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Advancements in tracking technologies like GPS, RFID and mobile devices have made trajectory data collection widespread. This surge in the use of tracking devices and the popularity of location-based services have greatly increased the availability of moving object trajectory data. The ontological modelling of this kind of data is of paramount importance in understanding and utilising it effectively. By incorporating as much semantic data as possible into the model, a variety of essential elements related to mobile object trajectories can be captured. An ontology model rich in semantics not only accurately represents trajectory characteristics but also links them to other relevant elements such as spatial and temporal contexts, movement types and mobile object behaviours. This semantic richness makes the model highly adaptable, allowing it to be reused in various contexts related to object mobility and making it generic. Moreover, integrating this semantic data significantly improves analysis and decision-making, which can then rely on more comprehensive and well-structured information, thereby facilitating informed conclusions and effective strategy implementation. Our objective is to propose a generic ontological model for trajectory data that is rich in semantics and considers the various aspects of moving objects, their movements, their trajectories and their interactions with their environment, aiming to fill the gap identified in other models proposed in the literature.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-08-03T07:00:00Z
DOI: 10.1142/S0219649224500837
- The Applications of Social Media for Luxury Brand Management — A Narrative Review
Authors: Ramsha Warsi, Shailja Dixit, Bobby W. Lyall
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Purpose — The emergence of social media and digital technologies has reshaped the competitive landscape for firms, which quickly acknowledged the increasing relevance of social media platforms for business purposes; social media is emerging as an important part of our lives. Luxury brands are gaining popularity as customers become wealthier. In this paper, we conducted an extensive review of research papers to understand the applications of social media for luxury brand management, and also studied the data analysis techniques used in those papers. Methodology — Papers related to the applications of social media for luxury brand management were collected from the Scopus database using keyword pairs such as “Social media marketing” and “Luxury brands”, “Social media consumer engagement” and “Luxury brands”, “Social media consumer response” and “Luxury brands”, and “Social media communication” and “Luxury brands”. Findings — Marketing and customer response are the two main categories of application of social media for luxury brand management. Regarding data analysis techniques, traditional models such as structural equation modelling and regression have been used more often than data mining and text mining techniques. Originality — We propose a new categorisation of research papers related to applications of social media for luxury brand management. We also conducted an extensive study of the data analysis techniques, concluding that traditional data analysis models have been used most in this research area. Recommendations are presented to social media marketing professionals to improve the effectiveness of social media campaigns for luxury brands, and we also outline challenges and future research directions in this area.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-07-31T07:00:00Z
DOI: 10.1142/S0219649224500849
- Blockchain Technology and Smart Contract Application in Security Management of Intelligent Chemical Plants
Authors: Changwen Wang, Junde Su, Hang Liu
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
As blockchain technology and smart contracts develop, computer technology is increasingly integrated into smart chemical plants. Due to the continuous development of intelligent chemical plants, their systems have gradually become large and dispersed, posing a threat to safety management. To improve the performance of intelligent security management systems, the study first explores the principles of blockchain and smart contract technology and then, combining these with the requirements of intelligent chemical plant security management, designs an intelligent security management system based on blockchain and smart contract technology. The experimental results showed that systems without smart contract support had a lower communication success rate between nodes. The error rates of the blockchain-based encryption system, the deep learning-based encryption system and the improved data encryption system proposed in the study were 0.22, 0.07 and 0.09, respectively, and the packet loss rates were 0.13, 0.04 and 0.05, respectively. The lower the bit error rate and packet loss rate of an encryption system, the clearer the illegally eavesdropped information. The experimental results indicate that the intelligent security management system designed in this study has good encryption performance and a higher communication success rate. The results have reference value for security management applications in intelligent chemical plants.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-07-30T07:00:00Z
DOI: 10.1142/S0219649224500850
- Intelligent Image Compression Model on the Basis of Wavelet Transform and Optimized Fuzzy C-Means-Based Vector Quantisation
Authors: Pratibha Pramod Chavan, Mayank Singh
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Compressed images are frequently used to accomplish computer vision tasks, and traditional image compression standards such as JPEG 2000 are in extensive use. However, these standards do not address the requirements considered here. We propose a new image compression model inspired by existing research on medical image compression. Here, the images are filtered at the preprocessing step to remove noise. The images are then decomposed using the discrete wavelet transform (DWT), and the outcome is vector quantised. In this step, we employ optimisation-assisted fuzzy C-means clustering for vector quantisation (VQ) with codebook generation. Treating this as an optimisation problem, a new hybrid optimisation algorithm called Bald Eagle Updated Pelican Optimization with Geometric Mean weightage (BUPOGM), a combination of pelican optimisation and bald eagle optimisation, is introduced to solve it. The quantised coefficients are finally encoded via Huffman encoding, and the compressed image is represented by the resultant bits. The outcome of the proposed work is satisfactory, as it performs better than other state-of-the-art methods.
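The final Huffman stage of the pipeline can be illustrated with the standard algorithm applied to quantised coefficient indices. This is a minimal sketch with our own helper names, not the paper's implementation:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) for a list of quantised indices."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate input: a single symbol gets a 1-bit code
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encode(symbols, code):
    """Concatenate the codewords: the compressed bitstream."""
    return "".join(code[s] for s in symbols)
```

More frequent indices receive shorter codewords, and the resulting code is prefix-free, so the bitstream decodes unambiguously.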
Citation: Journal of Information & Knowledge Management
PubDate: 2024-07-27T07:00:00Z
DOI: 10.1142/S0219649224500503
- Library Similar Literature Screening System Research Based on LDA Topic Model
Authors: Liang Gao, Fang Cui, Chengbo Zhang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Science and technology are highly cumulative undertakings: no scientific or technological worker can make good progress without building on the experience and achievements of predecessors or others. In the face of an ever-expanding pool of literature, efficiently and accurately searching for similar works is a major challenge in current research. This paper uses the Latent Dirichlet Allocation (LDA) topic model to construct feature vectors for the title and abstract, and the bag-of-words model to construct feature vectors for the publication type. The similarity between feature vectors is measured by calculating their cosine values. The experiments demonstrated that the precision, recall and WSS95 scores of the proposed algorithm were 90.55%, 98.74% and 52.45% under the literature title element, and 91.78%, 99.58% and 62.47% under the literature abstract element, respectively. Under the literature publication type element, the precision, recall and WSS95 scores were 90.77%, 98.05% and 40.14%, respectively. Under the combination of title, abstract and publication type elements, the WSS95 score of the proposed algorithm was 79.03%. In summary, the study proposes a literature screening (LS) algorithm based on the LDA topic model with robust performance, and a similar-literature screening system designed on this basis can effectively improve the efficiency of LS.
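The cosine-similarity step can be sketched directly. Here a bag-of-words count vector stands in for the LDA topic vectors the paper builds for titles and abstracts; helper names and the example vocabulary are ours:

```python
import math
from collections import Counter

def bow_vector(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Usage: compare two titles over a shared vocabulary
vocab = ["topic", "model", "screening", "library"]
sim = cosine(bow_vector("library screening with topic model", vocab),
             bow_vector("topic model for library screening", vocab))
```

In the paper's setting, the same cosine measure is applied to LDA topic-distribution vectors rather than raw counts, so word order and vocabulary size stop mattering.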
Citation: Journal of Information & Knowledge Management
PubDate: 2024-07-12T07:00:00Z
DOI: 10.1142/S0219649224500771
- Fake News Detection: Traditional vs. Contemporary Machine Learning Approaches
Authors: Aditya Binay, Anisha Binay, Jordan Register
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Fake news is a growing problem in modern society. With the rise of social media and ever-increasing internet accessibility, news spreads like wildfire to millions of users in a very short time. The spread of fake news can have disastrous consequences, from decreased trust in news outlets to overturned elections. Such concerns call for automated tools to detect fake news articles. This study proposes a predictive model that can check the authenticity of a news article. The model is constructed using two different techniques: (1) linguistic features and (2) feature extraction. We employed some widely used traditional algorithms (e.g. K-nearest neighbour (KNN) and support vector machine (SVM)) as well as state-of-the-art ones (e.g. bidirectional encoder representations from transformers (BERT) and the extreme learning machine (ELM)) using feature extraction methods and linguistic features. After generating the models, performance metrics (e.g. accuracy and precision) are used to compare their performance. The model generated via logistic regression using feature hashing vectorisation emerged as the best model, with 99% accuracy. To the best of our knowledge, no extant studies have compared the traditional and contemporary methods in this context and demonstrated the traditional ones to be better performers. The fake news detection model can help curb the spread of fake news by acting as a tool for news organisations to check the authenticity of a news article.
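Feature hashing vectorisation, which fed the best-performing logistic regression model, maps arbitrary tokens into a fixed-length vector without a stored vocabulary. A minimal sketch, with illustrative bucket count and helper names of our own:

```python
import hashlib
import math

def hash_features(tokens, n_buckets=16):
    """Feature hashing ('hashing trick'): map tokens into a fixed-length vector."""
    vec = [0.0] * n_buckets
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        sign = 1.0 if (h >> 64) % 2 == 0 else -1.0  # signed hashing reduces collision bias
        vec[h % n_buckets] += sign
    return vec

def logistic_score(vec, weights, bias=0.0):
    """Logistic-regression score (probability-like output) for a hashed vector."""
    z = bias + sum(w * x for w, x in zip(weights, vec))
    return 1.0 / (1.0 + math.exp(-z))
```

The vector length is fixed up front, so unseen words at prediction time need no vocabulary update; collisions are the price, mitigated by the sign trick.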
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-28T07:00:00Z
DOI: 10.1142/S0219649224500758
- Freedom from Feardom – Harnessing Women Empowerment through Personal Safety Mobile Applications
Authors: S. Vijayakumar Bharathi, Kanchan Pranay Patil, Dhanya Pramod
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Whilst the world is witnessing the impact of technology proliferation on human lives and livelihoods, the personal safety of women, though paramount, is still technologically under-addressed. This study empirically investigated the perception of Indian women (N = 210) towards personal safety apps and their intention to accept them to ensure personal safety. The study uniquely blended the Fogg behaviour model, which comprises motives, abilities and triggers, with the Technology Acceptance Model (TAM), which comprises perceived usefulness, ease of use and behavioural intentions. Structural equation modelling using SmartPLS 4 was used to analyse the model. Some notable outcomes emerged. The motives, namely subjective norms, facilitating conditions and perceived trust, significantly impacted women’s perceived usefulness of personal safety apps, while perceived risk was insignificant. The significant predictors of women’s perceived ease of use of personal safety apps include the abilities of self-efficacy and technology stress, but exclude perceived behavioural control. Among the triggers, only response efficacy impacted women’s behavioural intentions to use personal safety apps, while the magnitude of noxiousness and exposure expectancy did not. Women’s perceptions of the usefulness and ease of use of personal safety apps significantly impacted their behavioural intentions, ultimately impacting their perception of personal safety. Finally, the study presents implications for theory and practice before concluding with research limitations and future directions.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-27T07:00:00Z
DOI: 10.1142/S0219649224500710
- Role Balance Assignment Based on OCAT Method in Human Resource Planning
Authors: Xiaoping Que
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The core of human resources is the division of labour among enterprise employees. A human resource allocation method based on the E-CARGO model was designed to improve the development potential of enterprises and to balance managers’ preferences and employees’ task execution in human resource allocation. The method analyses the preferences of enterprise managers for employees and describes these preferences using the E-CARGO model. The OCAT method is then used to mine the relationship between team execution and managers’ preferences to find the balance between the two. The results showed that the experimental schemes found this balance: the original scheme found three balance points, improved Scheme 1 found three balance points, and improved Scheme 2 found one balance point and one balance interval. Among the three experimental schemes, improved Scheme 1 achieved the highest execution ability in the shortest time. The research successfully analyses the relationship between team execution and managers’ preferences in enterprise human resource assignment and puts forward a human resource assignment scheme that takes both into account.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-27T07:00:00Z
DOI: 10.1142/S0219649224500722
- Interpretive Structural Modelling Approach to Evaluate Knowledge Sharing Enablers in Circular Supply Chain: A Study of the Indian Manufacturing Sector
Authors: Anirban Ganguly, John V. Farr
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Knowledge sharing can be considered an important activity to improve the performance among various entities of a supply chain. The purpose of this study is to identify and evaluate a set of critical knowledge-sharing enablers that might aid in successfully managing a circular supply chain (CSC) in the context of the Indian manufacturing sector. The knowledge-sharing enablers were determined through a review of the extant literature, coupled with discussion with subject matter experts (SMEs). The quantitative technique of interpretive structural modelling (ISM) was used to analyse the identified knowledge-sharing enablers. The findings of this study revealed that the knowledge-sharing capabilities of an organisation, organisation structure and support from the top management formed the most significant enablers for Indian manufacturing organisations. This study has significant managerial and academic contributions. While supply chain managers can use the findings of this study to gain a better understanding of the role of knowledge sharing in managing CSC in the Indian manufacturing context, policymakers can use these findings to formulate strategies for effectively managing the CSC, as well as improving its operational effectiveness. The findings can also aid academic researchers to further analyse the role that knowledge sharing might play in successfully managing CSC, including other industries (for example, service industries), as well as other geographical regions.
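The core computation of ISM, deriving the reachability matrix from the experts' direct-relation matrix and reading off each enabler's driving and dependence power, can be sketched as follows. The matrix below is illustrative, not the study's data:

```python
def reachability(adj):
    """Transitive closure (Warshall's algorithm) of a binary direct-relation
    matrix, with the diagonal set to 1, as in the first step of ISM."""
    n = len(adj)
    r = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def driving_power(r):
    """Row sums: how many enablers each enabler (directly or indirectly) influences."""
    return [sum(row) for row in r]

def dependence_power(r):
    """Column sums: by how many enablers each enabler is influenced."""
    return [sum(r[i][j] for i in range(len(r))) for j in range(len(r))]
```

Enablers with high driving power and low dependence (here, whichever rows sum high and columns sum low) sit at the base of the ISM hierarchy, which is how the study identifies top-management support and organisational structure as the most significant enablers.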
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-27T07:00:00Z
DOI: 10.1142/S021964922450076X
- Deep Reinforcement Learning for Financial Forecasting in Static and Streaming Cases
Authors: Aravilli Atchuta Ram, Sandarbh Yadav, Yelleti Vivek, Vadlamani Ravi
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Literature abounds with various statistical and machine learning techniques for stock market forecasting. However, Reinforcement Learning (RL) is conspicuous by its absence in this field, despite its potential to address the dynamic and uncertain nature of the stock market. In a first-of-its-kind study, this research bridges this gap by forecasting stock prices with RL, in both static and streaming contexts, using deep RL techniques. In the static context, we employed three deep RL algorithms for forecasting stock prices: Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimisation (PPO) and Recurrent Deterministic Policy Gradient (RDPG), and compared their performance with the Multi-Layer Perceptron (MLP), Support Vector Regression (SVR) and the General Regression Neural Network (GRNN). In addition, we proposed a generic streaming analytics-based forecasting approach leveraging the real-time processing capabilities of Spark streaming for all six methods. This approach employs a sliding window technique for real-time forecasting, or nowcasting, using the above-mentioned algorithms. We demonstrated the effectiveness of the proposed approach on the daily closing prices of four different financial time series datasets as well as the Mackey–Glass time series, a benchmark chaotic time series. We evaluated the performance of these methods using three metrics: the Symmetric Mean Absolute Percentage Error (SMAPE), the Directional Symmetry statistic (DS) and Theil’s U Coefficient. The results are promising for DDPG in the static context, and GRNN turned out to be the best in the streaming context. We performed the Diebold–Mariano (DM) test to assess the statistical significance of the best-performing models.
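The sliding-window nowcasting idea and the SMAPE metric can be sketched in plain Python. A window-mean forecaster stands in here for the deep RL agents; the window size and function names are ours:

```python
def sliding_window_forecast(series, window=3):
    """One-step-ahead forecasts: predict the next value as the window mean.
    In the streaming setting, each new observation slides the window forward."""
    preds = []
    for t in range(window, len(series)):
        preds.append(sum(series[t - window:t]) / window)
    return preds

def smape(actual, forecast):
    """Symmetric Mean Absolute Percentage Error, in percent."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast) if a or f]
    return 100 * sum(terms) / len(terms)
```

SMAPE is bounded (0% to 200%) and symmetric in over- and under-forecasts, which is why it is a common choice alongside directional metrics like DS for price series of very different scales.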
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-27T07:00:00Z
DOI: 10.1142/S0219649224500801
- Modelling and Analysis of Smart Tourism Based on Deep Learning and Attention Mechanism
Authors: Miao Dong, Shihao Dong, Weichang Jiang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In current traditional tourism recommendation systems, significant amounts of manpower and resources are required to manually identify the characteristics of resources, resulting in extremely poor economic benefits. To address this issue, this study proposes a smart tourism model based on deep learning and attention mechanisms. It uses a deep learning model to extract semantic information and improves it with an attention mechanism, enabling the model to take into account both the complete meaning of the text and the associations between individual words, thereby achieving a more comprehensive extraction of tourism resource features. The experiments show that the [math]-value of the proposed algorithm reached 0.961, the Recall value reached 0.958, the accuracy reached 0.980 and the area under the receiver operating characteristic curve reached 0.956. All metrics are superior to those of the comparison algorithms, and in practical application testing, the model’s fitting degree reached 0.981. These results indicate that the proposed smart tourism model based on deep learning and an attention mechanism performs excellently in tourism resource recommendation: it can effectively extract hidden features from resources and accurately push the tourism resources that users are interested in, which can effectively promote the integration of the tourism industry and the Internet and has strong positive significance for economic development.
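The attention mechanism layered on top of the deep features can be illustrated with a minimal scaled dot-product attention. This is a generic sketch with illustrative dimensions, not the paper's architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value vector by the
    (scaled, softmax-normalised) similarity of its key to the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights
```

Because the output is a convex combination of the value vectors, words (or resource features) whose keys match the query dominate the representation, which is what lets the model weigh associations between individual words alongside the full text.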
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-27T07:00:00Z
DOI: 10.1142/S0219649224500825
- A Brief Survey of Text Mining: Domains, Implemented Algorithms and Evaluation Metrics
Authors: D. Kavitha, G. S. Anandha Mala
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Among text-oriented applications, Natural Language Processing (NLP) plays a significant role in managing and identifying particular text data, and NLP is broadly used in text mining domains as well. In general, text mining is the process of combining diverse techniques to characterise and transform text; hence, syntactic and semantic information are used together in NLP models to assist in analysing or extracting text. Text mining models are examined using different standard measurements, which vary with the text objectives or applications. With the advent of machine and deep learning models, text mining has become a hot research area used in various domains like classification, recognition, sentiment analysis, and speech-related topics. Though these models are effective and not time-consuming, certain factors must be considered for their further enhancement. Thus, this survey paper elucidates the evaluation metrics used in text mining approaches that deploy standard algorithms. It explores the literature on previously implemented text mining approaches to analyse the evaluation metrics used, and the model proposed in each surveyed paper is analysed. Further, it provides an algorithmic categorisation of existing research works, discusses the different datasets used together with various evaluation metrics, and finally categorises the metrics used for analysing the performance of such text mining approaches. The survey also illustrates the merits and demerits of existing text mining approaches. Finally, research gaps and challenging issues are given to direct future work.
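Three of the most common evaluation metrics covered by such surveys reduce to a few lines over confusion counts; a minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard text-mining evaluation metrics from confusion-matrix counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Precision penalises false alarms, recall penalises misses, and F1 (their harmonic mean) is the single-number trade-off most of the surveyed papers report.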
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-26T07:00:00Z
DOI: 10.1142/S0219649224500783
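The survey above catalogues evaluation metrics for text mining. As a hedged illustration (the function and the example counts are mine, not from the paper), the most common metrics reduce to simple arithmetic over confusion-matrix counts:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard text-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

# Example: 40 true positives, 10 false positives, 20 false negatives, 30 true negatives
m = classification_metrics(40, 10, 20, 30)
```

F1 balances precision and recall, which is why many of the surveyed papers report it alongside accuracy.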
-
- Automation and the Labour Market: A Systematic Literature Review Using
Bibliometric Analysis of 20 Years (2002-2022)-
Authors: Hue Truong Thi, Hang Trinh Thi Thu, Duong Bui Thi Quynh
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
This study uses bibliometric analysis to integrate, synthesise, and expand the knowledge regarding the relationship between automation and the labour market. In this paper, the authors examined the Web of Science (WoS) core collection database for articles published between 2002 and 2022. The co-citation, co-occurrence, and publication patterns were analysed using VOSviewer 1.6.19. The study comprised 287 papers, with the United States having the highest percentage of research publications, followed by Germany, China, and the United Kingdom. The institutional study shows that the Massachusetts Institute of Technology, Boston University, National Bureau of Economic Research, Harvard University, and the University of London are all leading institutions in this field of study and have more than 100 links. The co-occurrence of keywords revealed “automation”, “employment”, “growth”, and “jobs” as the most discussed terms. The paper concludes by identifying gaps in the literature and proposing possibilities for future studies.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-25T07:00:00Z
DOI: 10.1142/S0219649224500734
-
- Adoption of Fintech for Sustainable Administrative Efficiency of Higher
Educational Institutions: Bangladesh Perspective-
Authors: Md. Momin Uddin, Shaharia Sultana, Sharmin Rima
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Fintech solutions offer innovative tools and platforms that streamline financial operations, enhance convenience, and improve efficiency. The purpose of this research is to examine the impact of fintech on administrative efficiency among universities in Bangladesh. To fulfil this purpose, this research assesses the current level of fintech adoption among university students in Bangladesh and seeks to gain insights into students’ perceptions and attitudes towards fintech, particularly in relation to its impact on administrative tasks within their respective universities. Finally, this research identifies the challenges and opportunities associated with the integration of fintech in administrative processes and explores potential strategies to address them effectively. This research used both quantitative and qualitative strategies to achieve these goals. A survey was conducted to collect quantitative data from the university students of Bangladesh. Findings show that the adoption and usage of fintech have a statistically significant positive impact on administrative efficiency, resource allocation and utilisation, enhanced communication, and a sustainable and eco-friendly administrative environment.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-22T07:00:00Z
DOI: 10.1142/S0219649224500746
-
- AHBSMO-DRN: Single Device and Multiple Sharing-Based Geo-Position Spoofing
Detection in Instant Messaging Platform-
Authors: Shweta Koparde, Vanita Mane
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In recent years, location check-in on mobile devices has become a trending topic on social media. At the same time, hackers can grasp geographical position (geo-position) data, which compromises the security of users. Hence, it is crucial to verify the originality of geo-position data. A plethora of methods have been developed for geo-position spoofing identification that depend on geo-position data. Nonetheless, such techniques fail when prior data are missing or large samples are unavailable. To counter this issue, an effective model is devised to detect spoofing activity using an Adaptive Honey Badger Spider Monkey Optimization-based Deep Residual Network (AHBSMO-based DRN). Here, camera footprint refining is performed using a Neuro Fuzzy filter, and the footprint images extracted from the input and spoofed images are fused using the Pearson correlation coefficient. Meanwhile, the geo-tagged values of the input and spoofed images are fused based on the same Pearson coefficient. Finally, spoofing detection is accomplished by comparing the Discrete Cosine Transform (DCT) footprints of the two images to determine whether the input image is spoofed. Moreover, the AHBSMO-based DRN model has gained outstanding outcomes, with an accuracy of 0.921, a True Positive Rate (TPR) of 0.911, and a False Positive Rate (FPR) of 0.136.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-17T07:00:00Z
DOI: 10.1142/S0219649224500680
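The detection step above compares DCT footprints of two images. A minimal sketch of that idea, assuming a plain unnormalised DCT-II and a mean-absolute-difference footprint comparison (both my assumptions, not the authors' exact pipeline):

```python
import math

def dct_1d(v):
    """Type-II DCT of a 1-D sequence (unnormalised)."""
    N = len(v)
    return [sum(v[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct_2d(img):
    """2-D DCT: transform the rows, then the columns."""
    rows = [dct_1d(r) for r in img]
    cols = [dct_1d(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def footprint_distance(img_a, img_b):
    """Mean absolute difference between the DCT 'footprints' of two images."""
    A, B = dct_2d(img_a), dct_2d(img_b)
    n = len(A) * len(A[0])
    return sum(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb)) / n

def is_spoofed(img_a, img_b, tol=1e-6):
    """Flag the pair as spoofed when the footprints diverge beyond a tolerance."""
    return footprint_distance(img_a, img_b) > tol

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
tampered = [row[:] for row in img]
tampered[0][0] = 200          # simulate an altered (spoofed) image
d_same = footprint_distance(img, img)
d_diff = footprint_distance(img, tampered)
```

Because the DCT is linear, even a single changed pixel perturbs every coefficient, so identical images give a zero distance and any tampering gives a positive one.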
-
- Application Analysis of Music Video Retrieval Technology Based on Dynamic
Programming in Piano Performance Teaching-
Authors: Linna Huang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
With the development of Internet technology, music videos on the network are becoming increasingly rich. How to extract concert video clips for specific scenes or shots from massive video libraries or ultra-long video files is a relatively difficult issue. Traditional music video retrieval methods are mostly based on keyword text retrieval. However, they cannot meet the needs of users. At the same time, in response to the demand for specific videos in piano performance teaching, it is also difficult for these methods to filter out key music clips from numerous videos. Therefore, a music video retrieval technology is constructed based on video feature similarity calculation. To address the shortcomings of existing video similarity calculation methods, a dynamic programming algorithm is used to improve them. The improved music video retrieval technology is applied to classroom learning practice in piano performance teaching, verifying the actual effect of this technology. The experimental results show that the accuracy of the music video retrieval technology reaches 91.02%. After being applied to piano classroom teaching, the overall performance of students improved. This shows that the proposed music video retrieval technology can effectively retrieve the required videos and improve the effectiveness of piano classroom teaching.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-13T07:00:00Z
DOI: 10.1142/S0219649224500527
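The abstract improves video similarity calculation with dynamic programming. One standard DP formulation of sequence similarity is dynamic time warping over per-frame features; the sketch below is an illustrative stand-in, not the paper's exact algorithm:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic-programming alignment cost between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost aligning seq_a[:i] with seq_b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a frame of seq_a
                                  dp[i][j - 1],      # skip a frame of seq_b
                                  dp[i - 1][j - 1])  # match both frames
    return dp[n][m]
```

The DP tolerates tempo differences: a clip played slightly slower still aligns at zero cost when its frames repeat (e.g. `[1, 2, 3]` against `[1, 2, 2, 3]`).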
-
- The Effect of Knowledge Hiding on Academic and Employee Performances of
the Private Universities in Mogadishu, Somalia-
Authors: Mohamud Ahmed Mohamed, Fadumo Aden Iidle, Ibrahim Hassan Mohamud
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The primary objective of this research is to examine the correlation between knowledge hiding and academic and employee performances in the setting of private institutions in Mogadishu. This study used a quantitative methodology to carry out field research with a sample size of 120 academic staff members. The data collection method was executed meticulously, ensuring that the study’s findings maintain high validity and reliability. Statistical software such as SPSS and Smart PLS were subsequently utilised to analyse the data. The research findings indicate that evasive hiding positively impacts academic and employee performances, whereas playing-dumb and rational hiding strategies negatively impact them within the context of private universities in Mogadishu. The presented empirical data contribute to the current theoretical understanding of the detrimental impacts of knowledge hiding, precisely examining the widespread occurrences of evasive, playing-dumb and rational hiding. This study contributes substantially to the current scholarly debate around knowledge hiding inside academic institutions, providing valuable insights into the adverse outcomes associated with this phenomenon. A list of recommendations for future research was provided in the study in response to the identified limitations.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-10T07:00:00Z
DOI: 10.1142/S021964922450059X
-
- Wound Tissue Segmentation and Classification Using U-Net and Random Forest
-
Authors: V. S. Arjun, Leena Chandrasekhar, K. U. Jaseena
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Analysing wound tissue is a crucial research field for assessing the progression of wound healing. Wounds exhibit certain attributes concerning colour and texture, although these features can vary among different wound images. Research in this field serves multiple purposes, including confirming the presence of chronic wounds, identifying infected wounds, determining the origin of the wound and addressing other factors that classify and characterise various types of wounds. Wounds pose a substantial health concern. Currently, clinicians and nurses mainly evaluate the healing status of wounds based on visual examination. This paper presents an outline of digital image processing and traditional machine learning methods for the tissue analysis of chronic wound images. Here, we propose a novel wound tissue analysis system that consists of wound image pre-processing, wound area segmentation and wound analysis by tissue segmentation. The wound area is extracted using a simple U-Net segmentation model. Granulation, slough and necrotic tissues are the three primary forms of wound tissues. The k-means clustering technique is employed to assign labels to tissues. Within the wound boundary, tissue classification is performed by applying the Random Forest classification algorithm. Both the segmentation (U-Net) and classification (Random Forest) models are trained; the segmentation model achieves 99% accuracy and the classification model achieves 99.21% accuracy.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-10T07:00:00Z
DOI: 10.1142/S021964922450062X
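As a rough sketch of the tissue-labelling step above, k-means can cluster pixel colours into tissue groups before a classifier is applied. The toy pixel values and the naive first-k initialisation below are my assumptions, not the authors' implementation:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid, then re-average."""
    centroids = [list(p) for p in points[:k]]  # naive init: first k points (fine for a toy example)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # re-average each cluster; keep the old centroid if a cluster went empty
        centroids = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
              for p in points]
    return labels, centroids

# Toy RGB pixels: reddish (granulation-like) vs. yellowish (slough-like)
pixels = [(200, 40, 40), (210, 50, 45), (205, 45, 42),
          (220, 200, 60), (225, 210, 65), (218, 205, 62)]
labels, cents = kmeans(pixels, 2)
```

The cluster labels then serve as training targets for a supervised classifier such as the Random Forest used in the paper.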
-
- Sentiment Analysis-Based Automatic Stress and Emotion Recognition using
Weighted Fused Fusion-Based Cascaded DTCN with Attention Mechanism from
EEG Signal-
Authors: Atul B. Kathole, Savita Lonare, Gulbakshee Dharmale, Jayashree Katti, Kapil Vhatkar, Vinod V. Kimbahune
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
When loaded with difficulties in fulfilling daily requirements, a lot of people in today’s world experience an emotional pressure known as stress. Stress that lasts for a short duration of time has some advantages, as it can be good for mental health. But the persistence of stress for a long duration of time may lead to serious health impacts in individuals, such as high blood pressure, cardiovascular disease, stroke and so on. Long-term stress, if unidentified and not treated, may also result in personality disorder, depression and anxiety. The initial detection of stress has become more important to prevent the health issues that arise due to stress. Detection of stress based on brain signals for analysing the emotion in humans leads to accurate detection outcomes. Using EEG-based detection systems, diseases, disabilities and disorders can be identified from the brain by utilising brain waves. Sentiment Analysis (SA) is helpful in identifying the emotions and mental stress in the human brain. So, a system to accurately and precisely detect depression in humans based on their emotion through the utilisation of SA is of high necessity. The development of a reliable and precise Emotion and Stress Recognition (ESR) system to detect depression in real-time using deep learning techniques with the aid of Electroencephalography (EEG) signal-based SA is carried out in this paper. The essentials needed for performing stress and emotion detection are gathered initially from benchmark databases. Next, the pre-processing procedures, like the removal of artifacts from the gathered EEG signal, are carried out on the implemented model. The extraction of the spectral attributes is carried out from the pre-processed signals. The extracted spectral features are considered the first set of features. Then, with the aid of a Conditional Variational Autoencoder (CVA), the deep features are extracted from the pre-processed signals, forming a second set of features.
The weights are optimised using the Adaptive Egret Swarm Optimisation Algorithm (AESOA) so that the weighted fused features are obtained from these two sets of extracted features. Then, a Cascaded Deep Temporal Convolution Network with Attention Mechanism (CDTCN-AM) is used to recognise stress and emotion. The validation of the results from the developed stress and emotion recognition approach is carried out against traditional models in order to showcase the effectiveness of the suggested approach.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-07T07:00:00Z
DOI: 10.1142/S0219649224500618
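The spectral attributes mentioned above are typically band powers of the EEG signal. A minimal sketch under that assumption (the band edges, plain-DFT estimator and synthetic 10 Hz signal are mine, not the paper's):

```python
import cmath, math

def band_power(signal, fs, f_lo, f_hi):
    """Power of a signal in [f_lo, f_hi] Hz via a plain DFT (a toy spectral feature)."""
    N = len(signal)
    power = 0.0
    for k in range(N // 2 + 1):
        freq = k * fs / N
        if f_lo <= freq <= f_hi:
            X = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            power += abs(X) ** 2 / N
    return power

fs = 128                                   # sampling rate, Hz
t = [n / fs for n in range(fs)]            # one second of signal
alpha = [math.sin(2 * math.pi * 10 * x) for x in t]   # pure 10 Hz tone: alpha band
power_alpha = band_power(alpha, fs, 8, 13)
power_beta = band_power(alpha, fs, 14, 30)
```

A pure 10 Hz tone concentrates its power in the alpha band (8-13 Hz) and leaves the beta band (14-30 Hz) essentially empty, which is the kind of contrast such spectral features exploit.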
-
- Application of Particle Swarm Optimisation in Multi-Objective Cost
Optimisation of Engineering Enterprises under the Background of Digital
Economy-
Authors: Lin Song
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Engineering projects must meet quality and schedule requirements during construction. This is a typical multi-objective problem and a difficult point in the management of engineering enterprises. To address these issues, this study proposes an intelligent multi-objective optimisation technique. First, the optimisation objectives of the enterprise in the context of digitalisation are analysed, and a multi-objective cost optimisation model for engineering enterprises is constructed. Second, the Multi-Objective Particle Swarm Optimisation (MOPSO) algorithm is introduced to solve multi-objective problems. To improve the multi-objective optimisation effect of the model, which is prone to getting stuck in local optima, the inertia weight parameters and particle learning behaviour are optimised and adjusted. In the performance test of the algorithm model, the optimised MOPSO model can accurately search for the minimum value of 0 at the position (0, 0) under the Rastrigin function and requires the fewest iterations to converge. The GA, ACOM, and traditional MOPSO models require more iterations to converge, and their optimisation results are 0.10, 0.15, and 0.14, respectively. It can be seen that the performance of the optimised MOPSO model is better. In the specific example analysis, using the optimised MOPSO solution, the project cost was reduced from 31 million yuan in the contract to 30.52 million yuan, and the construction period was shortened from 588 days to 540 days, while meeting environmental protection and quality requirements. The research content can provide important decision support for engineering project managers.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-07T07:00:00Z
DOI: 10.1142/S0219649224500667
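For intuition on the Rastrigin benchmark mentioned above, a plain single-objective global-best PSO can be sketched as below. This is not the paper's MOPSO, which additionally maintains an archive of non-dominated solutions; the swarm size, iteration count and standard inertia/acceleration constants are my assumptions:

```python
import math, random

def rastrigin(x, y):
    """2-D Rastrigin: global minimum 0 at (0, 0), many local minima."""
    return 20 + x * x - 10 * math.cos(2 * math.pi * x) \
              + y * y - 10 * math.cos(2 * math.pi * y)

def pso(n_particles=30, iters=300, seed=1):
    """Plain global-best PSO minimising the 2-D Rastrigin function."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5.12, 5.12) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [rastrigin(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49          # standard inertia/acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = rastrigin(*pos[i])
            if val < pbest_val[i]:        # update personal best, then global best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso()
```

The abstract's tuning of inertia weight and learning behaviour corresponds to adjusting `w`, `c1` and `c2` here.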
-
- Constructing and Realising an Employment Platform for Slash Youth in the
Age of Digital Intelligence-
Authors: Xue Xiang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The current employment environment is becoming increasingly complex, with many job seekers competing with each other in more concentrated and narrower fields, worsening the job market as well as inhibiting the career potential of job seekers. There is a need to provide better employment guidance and employment quality assessment for slash youth. This study attempts to design a job recommendation model for slash youths by combining an improved collaborative filtering algorithm and a dynamic bilateral matching algorithm (BMA). The test results show that the precision rate of the BMA is always the largest with the increase of the number of clusters, with the highest value reaching 90.04%; the average ranking inverse curve of bilateral matching has the fastest growth rate, with the maximum value of 62.04%, which is 34.26% and 10.06% higher than the other two maximum values, and the optimal number of clusters is set to 24. The highest precision rate of the algorithm is 82.17% when the number of recommendations is 10. The algorithm also performed better in terms of recommendation diversity, with a maximum value of around 0.28. The recommendation success rate and satisfaction value reached 87.72% and 47.86%, respectively. The recommendation precision of the model designed in this study is high. It is conducive to solving problems such as difficulty in recruiting and finding jobs, and promotes the healthy development of the recruitment market.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-07T07:00:00Z
DOI: 10.1142/S0219649224500709
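The precision and "ranking inverse" figures above correspond to standard top-k recommendation metrics. A hedged sketch with invented job identifiers:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended jobs that the job seeker found relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

def reciprocal_rank(recommended, relevant):
    """1/rank of the first relevant item (the 'ranking inverse' measure)."""
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

recs = ["job_a", "job_b", "job_c", "job_d", "job_e"]
relevant = {"job_a", "job_c", "job_f"}
p5 = precision_at_k(recs, relevant, 5)
```

Averaging the reciprocal rank over many job seekers gives the mean-reciprocal-rank curve that the abstract compares across algorithms.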
-
- Evaluation and Screening of Technological Innovation and Entrepreneurship
Based on Improved BPNN Model-
Authors: Yan Zhou
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Aiming at the low success rate of incubation investment of China’s technology-based start-ups, how to scientifically evaluate technology-based start-ups has become an important issue. The expert-consultation character of the Delphi method makes it extremely suitable for decision-making problems in fields of uncertainty, and the backpropagation neural network (BPNN) offers strong nonlinear mapping and learning ability. According to the characteristics of the evaluation object and following the principle of index selection, the study uses the Delphi method to determine an evaluation index system suitable for technology-based entrepreneurial enterprises in the current environment and to obtain the scores of each index. Based on the established evaluation index system, the BPNN evaluation model is further constructed, and its parameters are optimised to improve its performance. Because the BPNN is prone to falling into local optima, a genetic algorithm (GA) is used to optimise it, yielding a GA-BPNN model. The excellent nonlinear characteristic analysis ability of the GA-BPNN is used to evaluate the comprehensive capability of enterprises and provide a reference for important decisions such as investment. Using BPNN simulation alone, the correct rate of evaluation of qualified enterprises ranged between 23.32% and 89.99%, with an average correct rate of 58.32%; for unqualified enterprises, the average correct rate was 80.99%. The evaluation accuracy was unstable and the average accuracy was low. The optimised GA-BPNN model achieved an average evaluation accuracy of 80.32% for qualified enterprises and 93.66% for unqualified enterprises, increases of 21.99% and 12.66%, respectively. The effectiveness of the model and algorithm was verified. 
It shows that the GA-BPNN model can be used as an effective tool for the evaluation and screening of technology-based entrepreneurial enterprises. The evaluation system of technology-based entrepreneurial enterprises established by research is scientific and can be applied in practice.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-06T07:00:00Z
DOI: 10.1142/S0219649224500655
-
- Poster Design Research Based on Deep Learning Automatic Image Generation
Algorithm-
Authors: Xiaoxi Fan, Yao Sun
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Although image generation models have made significant progress, they still face issues such as insufficient diversity of generated images, poor quality of high-resolution images, and the need for a large amount of training data for model optimisation. This paper studies poster design based on a deep learning automatic image generation algorithm, using a recursive supervised image generation framework of generative adversarial networks for multi-view image generation and super-resolution generation tasks on small-sample digital poster images. Various improvements are proposed to enhance the performance of the GAN model for poster design image generation tasks. Based on experimental research, this paper’s model uses generative adversarial networks to distinguish randomly cropped low-resolution and high-resolution poster blocks, ensuring that high-resolution posters maintain their original canvas texture and brush strokes, effectively improving the automatic generation effect of poster images. The evaluation results show that the quantitative evaluation of the proposed algorithm model in knowledge management is distributed in a reasonable range, which indicates that the proposed model performs well in knowledge management. The poster design model based on a deep learning automatic image generation algorithm proposed in this paper is shown to be effective. In subsequent practice, the automatic image generation algorithm can be combined with practical needs to improve the efficiency and effect of poster design.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-06-05T07:00:00Z
DOI: 10.1142/S0219649224500692
-
- Adoption of Electronic Knowledge Repositories: Influencing Factors in the
Indian Software Industry-
Authors: Mitali Chugh, Rajesh Kumar Upadhayay
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Purpose: This study intends to examine the factors that impact the intention and actual usage of Electronic Knowledge Repositories (EKR) by knowledge seekers in the Indian software sector. The study examines how perceived usefulness, perceived output quality, resource availability, and perceived organisational structure support influence the adoption of EKR. Design/methodology/approach: The data were gathered from 505 employees in 27 software engineering companies in the National Capital Region (NCR) of India using a self-administered survey. The study employed structural equation modelling (SEM) to examine the connections among the variables. Findings: The findings suggest that perceived usefulness, perceived output quality, resource availability, and perceived organisational structure support have a favourable impact on the intention to adopt EKR, and this intention favourably influences actual EKR usage by knowledge seekers in the Indian software sector. Research limitations/implications: The study focussed exclusively on the software sector in the NCR area of India, perhaps restricting the applicability of the results. Future studies should examine more variables that impact EKR adoption and study various organisational contexts. Practical implications: The research indicates that organisations should prioritise improving the perceived utility of EKR, assuring high output quality, allocating sufficient resources for EKR access, and maintaining supporting organisational structures to encourage EKR adoption among knowledge seekers. Originality/value: This study experimentally investigates the characteristics that influence EKR adoption in the Indian software sector, contributing to existing literature. The results offer useful insights for firms looking to enhance their knowledge management strategies by effectively utilising EKR.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-31T07:00:00Z
DOI: 10.1142/S0219649224500564
-
- Student Psychology Teaching Learning Optimisation-Based Deep Long
Short-Term Memory for Predicting Student Performance-
Authors: K. Sharada
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Knowledge Tracing (KT) analyses the state of knowledge among students to predict whether a student can answer a problem based on test results. Generally, a human teacher tracks the knowledge of students and customises the teaching based on the needs of the students. Nowadays, the rise of online education platforms has led to the development of machines for tracking the knowledge of students and improving their learning experience. The accuracy of classical KT techniques needs to be improved. Thus, this paper implements Student Psychology Teaching Learning Optimisation-based Deep Long Short-Term Memory (SPTLO-based DLSTM) for predicting student performance. Here, z-score normalisation is adopted to make the data values lie in a specific range. Furthermore, the Synthetic Minority Oversampling Technique (SMOTE) is employed to augment the data for enhanced handling. A Deep Maxout Network (DMN) with Ruzicka similarity is used for feature fusion. Deep KT for predicting student performance is executed with a Deep Long Short-Term Memory (DLSTM) network, which is trained employing SPTLO. The SPTLO is generated by unifying Student Psychology Based Optimisation (SPBO) and Teaching-Learning-Based Optimisation (TLBO). Here, SPTLO-based DLSTM achieved a superior accuracy of 92.5%, a Mean Absolute Error (MAE) of 0.064 and a Root Mean Square Error (RMSE) of 0.312.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-30T07:00:00Z
DOI: 10.1142/S0219649224500400
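Two of the pre-processing steps above, z-score normalisation and SMOTE, are standard and easy to sketch; the minority-class points and neighbour count below are illustrative assumptions, not the paper's data:

```python
import math, random

def z_score(values):
    """z-score normalisation: shift to zero mean and scale to unit standard deviation."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

def smote_sample(minority, k=2, seed=0):
    """One SMOTE-style synthetic point: interpolate between a minority sample
    and one of its k nearest minority-class neighbours."""
    rnd = random.Random(seed)
    base = rnd.choice(minority)
    neighbours = sorted((p for p in minority if p != base),
                        key=lambda p: sum((a - b) ** 2 for a, b in zip(p, base)))[:k]
    nb = rnd.choice(neighbours)
    gap = rnd.random()                      # interpolation factor in [0, 1)
    return tuple(b + gap * (n - b) for b, n in zip(base, nb))

z = z_score([2, 4, 6, 8])
minority = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
s = smote_sample(minority)
```

Because the synthetic point is an interpolation, each of its coordinates stays inside the range spanned by the minority samples.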
-
- Improved Association Rule Mining-based Data Sanitisation with Blockchain
for Secured Supply Chain Management-
Authors: Priti S. Lahane, Shivaji R. Lahane
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
A supply chain management (SCM) method must include information sharing as a vital component in order to improve supply chain performance and boost an organisation’s strategic advantage. However, due to a lack of trust, concern over information leakage, and security breaches by nefarious individuals or groups, several organisations are hesitant to share information with their supply chain partners. This work presents a new SCM-based secure data transmission method. By using blockchain-based data storage, it is assumed that manufacturers, suppliers, and customers transfer data that must be kept private during transmission. Accordingly, this paper provides an improved association rule mining scheme with data sanitisation, in which an improved Apriori algorithm is used in the proposed sanitisation process. In particular, a Long Short-Term Memory (LSTM) network generates keys according to an objective based on the preservation ratio, false rule generation, hiding failure, and degree of modification. The weights are adjusted via a novel Minkowski distance-based Namib beetle optimisation (MDNBO) technique, which also improves the performance of the LSTM model. The reverse process of encryption occurs when encrypted data are restored at the receiving end. By contrasting it with older methods with regard to security, the proposed protection of data in SCM with blockchain technology is shown to be efficient.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-27T07:00:00Z
DOI: 10.1142/S0219649224500412
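The sanitisation scheme builds on an improved Apriori algorithm. The baseline Apriori idea, frequent-itemset mining via the downward-closure property (every subset of a frequent itemset must itself be frequent), can be sketched as follows; the grocery-style transactions are invented for illustration:

```python
def apriori(transactions, min_support):
    """Frequent itemsets by the Apriori principle."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        survivors = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(survivors)
        # candidate generation: unions of surviving sets, exactly one item larger
        size = len(next(iter(survivors), frozenset())) + 1
        k_sets = list({a | b for a in survivors for b in survivors
                       if len(a | b) == size})
    return frequent

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread", "eggs"}, {"milk", "eggs"}]
freq = apriori(transactions, min_support=0.5)
```

Sanitisation schemes like the one above then hide the sensitive rules derived from these frequent itemsets while trying to preserve the rest, which is what the preservation-ratio and hiding-failure terms in the objective measure.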
-
- Personalised Recommendation of Literary Learning Resources Based on a
Mixed Recommendation of Learning Interest and Contextual Awareness-
Authors: Min Guo
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In an effort to improve the efficiency and recommendation accuracy of mobile learning resources, the study proposes a hybrid mobile learning strategy based on Collaborative Filtering (CF), context and interest. Analysing from the perspective of situational awareness, a personalised recommendation model for text learning resources is constructed based on Gimbal™, and a recommendation form is obtained. The experimental results show that the RMSE and MAE of Context-Collaborative Filtering (C-CF) are lower than those of traditional CF, and the Precision and Recall values of C-CF are higher than those of CF at 10 s; the recommendation growth rates of traditional CF and C-CF are 2.09% and 1.67%, respectively. The Gimbal software enables a certain degree of learner location detection and can trigger contextual rules based on time and location contexts to provide users with personalised text-based learning resources. The research results indicate that in specific applications, students’ grades steadily increase over time under the recommendation system, which is also beneficial for improving their learning efficiency.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-27T07:00:00Z
DOI: 10.1142/S0219649224500552
-
- Untold Intelligence: Tacit Knowledge and Marketing Success
-
Authors: Moin Ahmad Moon, Ansar Abbas
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Salespersons are a source of invaluable market knowledge that can help change the course of organisational strategy. However, research on extracting this knowledge is scarce. Therefore, this research aims to develop and empirically test a model that explains the tacit knowledge exchange process between salespersons and marketing. Data were collected from 224 randomly selected business-to-business and business-to-consumer salespersons (boundary spanners) of commercial banks in Pakistan. Structural equation modelling (SEM) via maximum likelihood estimation (MLE) was conducted using AMOS 24. Except for inter-functional communication quality and inter-functional conflict, all antecedents significantly influence tacit knowledge exchange. Tacit knowledge exchange significantly affects relative efficiency, marketing program innovation and relative effectiveness. Training programs that bring sales and marketing employees together to increase trust, socialisation opportunities and task conflict may strengthen the tacit knowledge exchange process in the banking sector.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-27T07:00:00Z
DOI: 10.1142/S0219649224500679
-
- Construction of Accounting Fraud and Its Audit Countermeasure Model Based
on Computer Technology-
Authors: Yuanbao Wang, Guangliang Zhu
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
A prediction model of financial fraud of listed companies based on machine learning methods is proposed. Using a dataset of Chinese listed companies from 2000 to 2020 as observation samples, Benford’s Law, the LOF local anomaly method and SMOTE oversampling were adopted, grey samples were excluded, and characteristic variables were selected from five aspects: fraud motivation, solvency, profitability, cash flow and operating capacity. The financial fraud identification model Xscore is established based on the XGBoost method. The Xscore model improves prediction accuracy and is superior to the Fscore and Cscore models in accuracy, recall rate, AUC, KS value and PSI stability, making it better suited to predicting the financial fraud of listed companies in China. The results of this study help promote the research and application of artificial intelligence and machine learning in accounting, and provide references for promoting the disclosure of high-quality financial information by listed companies and maintaining the order of the capital market.
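Benford's Law, one of the screening tools mentioned above, predicts first-digit frequencies P(d) = log10(1 + 1/d). A minimal check of a numeric ledger column against that distribution might look like this sketch; the deviation measure (mean absolute deviation) is an illustrative choice, not the paper's exact test.

```python
import math
from collections import Counter

def benford_expected():
    """Expected first-digit frequencies under Benford's Law: P(d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    """Observed first-digit frequencies of non-zero numeric values."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

def benford_deviation(values):
    """Mean absolute deviation between observed and expected first-digit frequencies."""
    expected, observed = benford_expected(), first_digit_freqs(values)
    return sum(abs(observed[d] - expected[d]) for d in range(1, 10)) / 9
```

A large deviation flags a column for closer anomaly screening (e.g. with a local outlier method such as LOF).
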
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-21T07:00:00Z
DOI: 10.1142/S0219649224500424
-
- The Impact of Knowledge Management Practices on Organisational
Performance: Case Study in a Public Organisation-
Authors: Ahmed Ledmaoui, Bouchaib Mokhtari
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
This study delves into the strategic challenge of Knowledge Management (KM) in organisations, particularly in the public sector, and aims to identify the impact of KM practices on organisational performance. Examining knowledge as a valuable resource for competitive advantage, the study identifies the factors that affect knowledge formalisation and preservation in organisations. The study surveyed several department heads from a public organisation to analyse the impact of KM practices, and found that practices such as knowledge capitalisation, formalisation, sharing and inventory have a positive effect on organisational performance. The study also sheds light on the importance of understanding the context and potential barriers to effective KM, particularly department heads’ reluctance to share knowledge due to concerns over losing power. The contribution of this study lies in bridging the gap between theory and practical context in KM practices and identifying the factors that affect organisational performance.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-21T07:00:00Z
DOI: 10.1142/S0219649224500588
-
- Unveiling and Modelling the Impact of Learning Organisations on
Information Technology Employees’ Career Advancement and Retention-
Authors: Vijayabanu Chidambaram, Rajagopalan Aravamudhan
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Learning organisations facilitate the acquisition of new knowledge and skills for both organisations and their members, enabling the application of the latest insights in a dynamic environment. This study addresses a research gap by constructing a model identifying the factors influencing learning organisations and their subsequent impact. The research delves into uncovering the catalysts behind learning organisations concerning the career advancement of information technology (IT) employees and their intent to remain in their roles. The study, conducted in a developing country, India, employs an explanatory research design to explore the interrelationship between learning organisations, career progression and employee retention. Primary and secondary data have been used for this study. The primary data has been collected from 389 IT sector employees at various employment positions in Chennai through a structured and standard questionnaire based on the Dimensions of Learning Organizations Questionnaire (DLOQ) by Watkins, KE and Marsick, VJ (2023) [Rethinking workplace learning and development catalyzed by complexity. Human Resource Development Review, 22(3), 333–344. doi:10.1177/15344843231186629], demonstrating a high reliability at 96.4%.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-20T07:00:00Z
DOI: 10.1142/S0219649224500631
-
- H-mrk-means: Enhanced Heuristic mrk-means for Linear Time Clustering of
Big Data Using Hybrid Meta-heuristic Algorithm-
Authors: Digvijay Puri, Deepak Gupta
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Big data generally arrives in large volumes with mixed categories of attributes, both categorical and numerical. Among existing approaches, k-prototypes has been adapted to the MapReduce framework and thus provides a workable solution for very large datasets. However, k-prototypes must compute the distances between every data point and all cluster centres, and many of these computations are redundant because data points often remain in the same clusters after a few iterations. For clustering huge-scale datasets, one efficient solution is k-means, but k-means is not intrinsically suited to MapReduce because of its iterative nature: each iteration requires an independent MapReduce job, incurring high Input/Output (I/O) overhead per iteration. This paper presents a novel enhanced linear-time clustering method for big data called Heuristic mrk-means (H-mrk-means), which uses optimised k-means on the MapReduce model. To manage big data that is time series in nature, a sampling step and the MapReduce framework are adopted, utilising different machines to process the data. Before the main clustering begins, sampling extracts the noteworthy information. The two main phases of the developed method are the map phase (divide and conquer) and the reduce phase (final clustering). In the map phase, the data are divided into chunks stored on assigned machines; in the reduce phase, the data are clustered. Here, the cluster centroids are tuned with the help of a hybrid Tunicate-Deer Hunting Optimisation (T-DHO) algorithm by optimising a newly derived objective function. This optimal tuning improves clustering efficiency compared with standard iterative k-means and mrk-means clustering. Experimental evaluation on varied numbers of chunks shows that the proposed H-mrk-means attains higher-quality clustering results and faster execution times than other clustering approaches.
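The map/reduce split described above (per-chunk assignment in the map phase, centroid merging in the reduce phase) can be sketched for one plain k-means iteration as follows; this is a generic illustration of the MapReduce pattern, not the paper's H-mrk-means or its T-DHO centroid tuning.

```python
def nearest(point, centroids):
    """Index of the centroid closest to point (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))

def map_chunk(chunk, centroids):
    """Map phase: assign each point in a chunk, emit per-centroid (sum vector, count)."""
    partial = {}
    for pt in chunk:
        i = nearest(pt, centroids)
        s, n = partial.get(i, ([0.0] * len(pt), 0))
        partial[i] = ([a + b for a, b in zip(s, pt)], n + 1)
    return partial

def reduce_partials(partials, centroids):
    """Reduce phase: merge partial sums from all chunks into new centroids."""
    merged = {}
    for partial in partials:
        for i, (s, n) in partial.items():
            ms, mn = merged.get(i, ([0.0] * len(s), 0))
            merged[i] = ([a + b for a, b in zip(ms, s)], mn + n)
    # an empty cluster keeps its old centroid
    return [[v / merged[i][1] for v in merged[i][0]] if i in merged else c
            for i, c in enumerate(centroids)]

# one iteration over two chunks held on (conceptually) different machines
centroids = [[0.0, 0.0], [9.0, 9.0]]
chunks = [[[0.0, 0.0], [1.0, 0.0]], [[9.0, 9.0], [10.0, 9.0]]]
new_centroids = reduce_partials([map_chunk(c, centroids) for c in chunks], centroids)
```

Only the small (sum, count) pairs travel between machines, which is what keeps the per-iteration I/O manageable.
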
Citation: Journal of Information & Knowledge Management
PubDate: 2024-05-11T07:00:00Z
DOI: 10.1142/S0219649224500540
-
- Heterogeneous Internet of Things Big Data Analysis System Based on Mobile
Edge Computing-
Authors: Lin Yang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The big data heterogeneous Internet of Things (IoT) requires mobile edge computing (MEC) to process some data, and MEC data analysis systems often suffer from excessive terminal energy consumption (ECS) or long delays. This study therefore designed an energy-saving optimisation algorithm for the task offloading processing module in a big data heterogeneous IoT analysis system, and conducted simulation experiments to verify its performance. The experimental results show that the #04 scheme of the designed algorithm has the lowest terminal ECS under the same conditions. With the #04 scheme, comparative analysis shows that when the edge server (ES) computing rate is 10 cycles/s, the weighted sum values of terminal ECS for the EOPU, MPCO, exhaustive search, and local computing methods are 23.6 J, 23.9 J, 28.5 J and 84.5 J, respectively. Moreover, the algorithm retains a significantly higher percentage of remaining time than other methods under different totals of SMD devices and subchannels. This indicates that the designed algorithm can markedly enhance the processing performance of the task offloading model and effectively reduce terminal ECS and system latency. The research results can serve as a reference for improving the processing ability of heterogeneous IoT big data analysis systems. The study’s academic contribution is a model that effectively reduces the operational ECS and time consumption of heterogeneous IoT big data analysis systems containing mobile IoT devices. From an industrial perspective, the results help improve the efficiency of information exchange and processing in IoT computing, thereby promoting the adoption of IoT technology.
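The weighted-sum objective underlying the comparison above (terminal energy traded off against delay when choosing where to execute a task) can be illustrated with a minimal cost comparison; the candidate labels, numbers and equal weights below are invented for the example, not the paper's measurements.

```python
def offload_cost(energy_j, delay_s, w_energy=0.5, w_delay=0.5):
    """Weighted sum of terminal energy consumption and latency; lower is better."""
    return w_energy * energy_j + w_delay * delay_s

# candidate offloading strategies: (label, energy in J, delay in s), illustrative values
candidates = [("local", 84.5, 0.2), ("edge", 23.6, 0.6), ("exhaustive", 28.5, 0.5)]
best = min(candidates, key=lambda c: offload_cost(c[1], c[2]))
```

Tuning the two weights shifts the chosen strategy between energy-frugal and latency-frugal operating points.
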
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-27T07:00:00Z
DOI: 10.1142/S0219649224500473
-
- Citizen Satisfaction through the Development of a Sustainable Mobile
Government Service Model — A Blended Approach through M-S-QUAL and EGAM
Theories-
Authors: Kanchan Pranay Patil, S. Vijayakumar Bharathi
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
This research examines the relationship between sustainable m-Gov services and citizen satisfaction with m-Gov services. A multidimensional conceptualisation of sustainable m-Gov services is defined to examine citizen satisfaction. The research model was tested using empirical data collected from 687 m-Gov service users through PLS-SEM. The results showed that service availability, contact, responsiveness, efficiency, and privacy significantly influenced m-Gov service quality. Mobile self-efficacy, perceived trust, and perceived functional benefit are critical for m-Gov adoption. However, perceived compatibility and perceived ability-to-use did not explain the m-Gov adoption. The findings of m-Gov service quality and m-Gov adoption interactions supported their role in predicting sustainable m-Gov services, thereby increasing citizen satisfaction. The outcome of this study is vital for government strategies, public administration, policymakers, and government service delivery literature and provides citizen-centric m-Gov services. Thus, the government and citizens adopting m-Gov services can benefit from the tested model towards increasing the sustainable offering of m-Gov services.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-25T07:00:00Z
DOI: 10.1142/S0219649224500485
-
- Identifying and Improving Problems and Risks of Management Strategies
Based on GQM+Strategies Metamodel and Design Principles-
Authors: Chimaki Shimura, Hironori Washizaki, Yohei Aoki, Takanobu Kobori, Kiyoshi Honda, Yoshiaki Fukazawa, Katsutoshi Shintani, Takuto Nonomura
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Due to the criticality of software and IT in today’s business environment, organisations often align business goals and IT strategies. One alignment method is GQM+Strategies® (a registered trademark, No. 302008021763 at the German Patent and Trade Mark Office, international registration number IR992843). Although GQM+Strategies employs a vertical refinement tree grid based on rationales to align business goals and IT strategies in each department and throughout the organisation, it allows multiple perspectives. This can lead to strategic problems and risks because the GQM+Strategies grid may be unclear. To address this deficiency, this study defines modelling rules with a metamodel specified as a Unified Modelling Language (UML) class diagram and employs design principles described in the Object Constraint Language (OCL) to automatically configure GQM+Strategies grids. These design principles are used in an experiment as evaluation criteria to assess potential strategic problems and risks. The results confirm that our method can support the construction of GQM+Strategies grids with a consistent perspective, aiding the alignment of business goals and IT strategies throughout an organisation.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-25T07:00:00Z
DOI: 10.1142/S0219649224500497
-
- Development and Implementation of a Multilayer Deep Learning-Based Bank
Credit Risk Forecasting System-
Authors: Xiaohui Long
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The complexity of the financial environment and the international community exposes capital flows to various challenges, and accurate credit prediction results are difficult to obtain in practical application environments. Considering the complex non-linear characteristics of customer information, the Analytic Hierarchy Process is applied to meet the needs of bank credit risk assessment. On this basis, deep neural networks of different complexities were selected for the three constructed indicators to classify the features. The composition of the neural network modules and the number of neurons were determined experimentally, and Dropout was used to prevent overfitting on the test dataset. Stability experiments showed that the model can hold the error between datasets to 0.021, and the ablation experiment confirmed that the chosen numbers of hidden layers and neurons were optimal. Simulation tests showed a sensitivity of 85.25% and an accuracy of 92.55%, superior to other classification methods. Real bank data from the past four years were tested: the model accurately classified the risks of enterprise and individual customers, and stress tests showed that it is stable. Traditional credit risk assessment models rely on statistical means and rule-based decisions, which may not fully reveal the complex non-linear relationships among financial indicators in high-dimensional data. Combining deep learning with hierarchical analysis better handles and explains the complex non-linear problems in bank risk assessment.
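The Analytic Hierarchy Process step can be sketched with the row geometric mean approximation of the principal eigenvector of a pairwise comparison matrix; the 3x3 matrix of credit-risk indicator groups below is hypothetical, chosen only to show the mechanics.

```python
import math

def ahp_weights(pairwise):
    """AHP priority weights via the row geometric mean approximation."""
    gms = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# hypothetical pairwise comparison of three credit-risk indicator groups
# (entry [i][j] says how much more important group i is than group j)
matrix = [[1, 3, 5],
          [1 / 3, 1, 3],
          [1 / 5, 1 / 3, 1]]
weights = ahp_weights(matrix)
```

The resulting weights can then scale the indicator scores fed into the downstream classifier.
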
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-25T07:00:00Z
DOI: 10.1142/S0219649224500515
-
- Leadership Dynamics in the Knowledge-Based Landscape: Unravelling the
Mediating Forces of Cognition on Innovative Behaviour-
Authors: Weilee Lim, Tarique Mahmood, Syeda Alina Zaidi, Younnus Muhammad Areeb
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In the dynamic knowledge-based landscape, effective leadership plays a crucial role in channelling knowledge and expertise towards innovation. Multiple previous studies have examined the correlation between leadership style and innovative behaviour. Nevertheless, few empirical studies have investigated the mediating influence of psychological empowerment on the connection between organisational leadership and innovative behaviour. The study also examines how creative self-efficacy influences the connection between entrepreneurial leadership and innovative behaviour. This study addressed the knowledge gap by examining the correlation between transformational leadership, entrepreneurial leadership, and innovative behaviour, and explored the role of psychological empowerment and creative self-efficacy as mediators in those relationships. A questionnaire was administered to 228 employees working in Pakistan’s SMEs. The data were then analysed using Structural Equation Modelling (SEM) with Smart-PLS software. The findings suggest transformational leadership and entrepreneurial leadership heighten innovative behaviour among employees through psychological empowerment and creative self-efficacy. No direct impact was established for transformational leadership on innovative behaviour, indicating the presence of a mediator. This study contributes to the existing literature by offering empirical proof of the correlation between transformational leadership, entrepreneurial leadership, and innovative behaviour, and by examining the influence of psychological empowerment and creative self-efficacy on innovative behaviour. The practical implication is that to encourage innovative behaviour, leaders need to engage directly with employees to empower them and build their confidence.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-25T07:00:00Z
DOI: 10.1142/S0219649224500606
-
- A Theoretical Study of the Representational Power of Weighted Randomised
Univariate Regression Tree Ensembles-
Authors: Amir Ahmad, Sami M. Halawani, Ajay Kumar, Arshad Hashmi, Mutasem Jarrah, Abdul Rafey Ahmad, Zia Abbas
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Univariate regression trees have representation problems for non-orthogonal regression functions. Ensembles of univariate regression trees have better representational power. In some cases, weighted ensembles have shown better performance than unweighted ensembles. In this paper, we study the properties of ensembles of regression trees by using regression classification models. We propose a theoretical framework to study the representational power of infinite-sized weighted ensembles, consisting of randomised finite-sized regression trees. We show for some datasets that the weighted ensembles may have better representational power than unweighted ensembles, but the performance is highly dependent on the weighting scheme and the properties of datasets. Our model cannot be used for all the datasets. However, for some datasets, we can accurately predict the experimental results of ensembles of regression trees.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-19T07:00:00Z
DOI: 10.1142/S021964922450045X
-
- ICT Adoption and Its Effects on the Economic Growth of Bangladesh: A Time
Series Analysis-
Authors: Md. Mominul Islam, Enamul Hafiz Latifee, Sabikun Nahar Sumi, Md. Zulfiker Hayder
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Understanding the particular effects of information and communication technology (ICT) adoption on a nation like Bangladesh is critical because ICT adoption continues to play a significant role in promoting economic progress globally. By analysing the relationships between ICT use and economic growth in Bangladesh, this study examines data from 2002 through 2021 using a time series analysis. ICT exports, gross capital formation, mobile cellular subscriptions, government budget allocation, and foreign direct investment are among the independent variables the study considers in relation to ICT adoption. These factors were selected based on their applicability to the adoption of ICT and their potential impact on economic expansion. According to the results of the time series analysis, the ICT sector’s growing exports have a favourable and considerable impact on long-term economic growth. Mobile cellular subscriptions negatively affect economic growth, with a significant short-term impact and an insignificant long-term effect. The influence of the government budget is detrimental but insignificant for both long-term and short-term economic growth. The study concludes that Bangladesh’s economic development benefits significantly from the adoption of ICT. The study also highlights the significance of government assistance through budgetary allocation and strategies that attract foreign direct investment. The results can direct the development of policies and evidence-based decision-making to use ICT adoption as an accelerator for sustained economic development in Bangladesh.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-19T07:00:00Z
DOI: 10.1142/S0219649224500539
-
- High-Performance Work Systems in Service Industries: A Bibliometric and
Thematic Analysis-
Authors: Padamata Karthik, Vangapandu Rama Devi
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The imperative need of this hour is to measure the influence of high-performance work system (HPWS) research and comprehend its patterns, as HPWS has become more prevalent in service-oriented businesses. To this end, the authors aim to shed light on publication trends and set the future research agenda for HPWS with special reference to the service sector. The study adopts a bibliometric approach, as the authors intend to identify and analyse the breadth of HPWS research in the service context across the world, so that a statistical and analytical overview of the research and suggestions for future directions can be provided. A portfolio of 262 articles was extracted from the Scopus database, and various bibliometric techniques were used to analyse the collected articles using “R” programming with the biblioshiny web interface. The bibliometric results revealed the dynamics of research trends in HPWS service-context studies, the most influential publications, authors, sources, the most productive countries, and affiliations. The citation analysis revealed the most cited scientific publications and the countries from which most citations were received. Likewise, the thematic analysis revealed the underlying themes and patterns of HPWS service-context studies that emerged over time. In this way, the study contributes to the literature by depicting the intellectual landscape of HPWS research in the service context, which will be useful to researchers, academicians, practitioners, policy makers and funding agencies.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-19T07:00:00Z
DOI: 10.1142/S0219649224500576
-
- Sustainable Scientific and Technological Talents Recommendation Method
Based on Recommendation Algorithm-
Authors: Bei Zhang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In today’s era of rapid global technological development, the demand for scientific and technological talents remains high. To provide a reliable talent referral channel, a talent recommendation model based on Bidirectional Encoder Representations from Transformers (BERT) and Bi-directional Long Short-Term Memory (BLSTM) was constructed. This model matches talented individuals with job opportunities. The results demonstrated that the accuracy and F1 value of BLSTM-BERT on the test set were 0.95 and 0.92, respectively, while the precision rate, recall rate, F1-score and accuracy rate of the BLSTM-CNN model were 0.96, 0.97, 0.96 and 0.97, respectively. The correct prediction rate of the talent recommendation model for the four types of talents was 1.0. The talent recommendation model is evidently highly accurate in predicting talent categories and can precisely recommend the scientific and technological professionals that businesses need.
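The precision, recall and F1 figures quoted above follow the standard definitions over confusion-matrix counts; the counts below are illustrative, not the paper's.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# illustrative counts for one talent category
p, r, f1 = precision_recall_f1(tp=96, fp=4, fn=3)
```

F1 is the harmonic mean of precision and recall, so it always lies between the two.
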
Citation: Journal of Information & Knowledge Management
PubDate: 2024-04-03T07:00:00Z
DOI: 10.1142/S0219649224500436
-
- Cluster-Based Cross Layer-Cross Domain Routing Model with DNN-Based Energy
Prediction-
Authors: Shivaji R. Lahane, Priti S. Lahane
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In WSNs, extending network lifetime remains a major problem, and cross-layer protocols are used to address it. This paper presents a new cross-layer routing design using a clustering-based technique. The proposed model performs optimal cluster-based routing via a new algorithm. Initially, during network generation, each node’s energy is predicted by a deep learning model (a DNN) that takes the distance between the node and the sink as input. Subsequently, during clustering, the cluster head is optimally selected via a new optimisation algorithm named Self-Improved Shuffle Shepherd Optimisation (SISSO). Cluster head selection considers constraints including link quality, distance, overhead, energy and delay. Finally, a Modified Kernel Least Mean Square (MKLMS)-based data aggregation process eliminates redundant data transmission. The SISSO method is shown to be superior to other conventional approaches with regard to alive nodes and network lifetime. In the alive-node analysis of supernodes, the proposed SISSO model retains the maximal fraction of alive supernodes at 2,000 rounds (i.e. 0.67) compared with other conventional methods.
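Scoring cluster-head candidates against the listed constraints can be sketched as a weighted fitness over normalised metrics; the field names, weights and node values below are assumptions for illustration, and a simple minimum stands in for the SISSO search itself.

```python
def cluster_head_score(node, weights):
    """Weighted fitness over metrics normalised to [0, 1]; lower is better.
    Link quality and residual energy are inverted so that higher raw values
    reduce the score."""
    return (weights["link"] * (1 - node["link_quality"])
            + weights["dist"] * node["distance"]
            + weights["overhead"] * node["overhead"]
            + weights["energy"] * (1 - node["energy"])
            + weights["delay"] * node["delay"])

# illustrative equal weights and two candidate nodes
weights = {"link": 0.2, "dist": 0.2, "overhead": 0.2, "energy": 0.2, "delay": 0.2}
nodes = [
    {"id": "A", "link_quality": 0.9, "distance": 0.2, "overhead": 0.1, "energy": 0.8, "delay": 0.1},
    {"id": "B", "link_quality": 0.5, "distance": 0.6, "overhead": 0.4, "energy": 0.4, "delay": 0.5},
]
head = min(nodes, key=lambda n: cluster_head_score(n, weights))
```

A metaheuristic such as SISSO would search the weight and assignment space instead of taking a single fixed minimum as here.
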
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-26T07:00:00Z
DOI: 10.1142/S0219649224500369
-
- Construction of a Sustainable Training System for Engineering and
Technological Innovation Talents Based on CIPP Model in the Digital Era-
Authors: Qingjun Liang
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
With the advent of the digital era, market demand for engineering and technological talents is increasing, making it crucial to design a sustainable training and evaluation system for engineering and technological innovation talents. However, traditional data evaluation models suffer from unsuitable evaluation indicators and low accuracy. This paper therefore selects the decision-making-oriented evaluation model to build the basic talent-training evaluation indicators, and designs expert questionnaires based on the Delphi method to screen them. It also combines an adaptive-genetic-algorithm-optimised backpropagation neural network to establish a talent cultivation evaluation model, and conducts MATLAB simulation experiments to verify its feasibility. The results showed that the evaluation accuracy of the research model was 0.99635 and its fitness value was 1.34, which is 0.5 higher than the unmodified model, achieving good evaluation results. Compared with the traditional genetic-algorithm-optimised model and the unimproved backpropagation model, the average evaluation accuracy of the research model increased by 66.44% and 13.59%, and the recall rate increased by 10.79% and 23.96%, respectively. The research model improves evaluation accuracy and adaptability, achieving superior evaluation results of significant value for cultivating engineering and technological innovation talents.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-26T07:00:00Z
DOI: 10.1142/S0219649224500448
-
- Rap-Densenet Framework for Network Attack Detection and Classification
-
Authors: Arun Kumar Silivery, Kovvur Ram Mohan Rao, Suresh L. K. Kumar
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Cybersecurity is becoming increasingly important with the rise in Internet usage. The two most frequent cyberattacks that can seriously harm a website or a server and render it inaccessible to legitimate users are denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. Because these attacks are so common and take so many different forms, it is difficult to identify and respond to them with previous methods. Furthermore, computational complexity, inconsistency and irrelevant data are problems for traditional intrusion detection methods. As a result, a powerful deep-learning-based technique is applied in this study to identify and categorise DoS and DDoS attacks. Refined Attention Pyramid Network (RAPNet)-based feature extraction is used in the proposed framework to extract features from the input data, and the Binary Pigeon Optimisation Algorithm (BPOA) is then used to determine the best features. After choosing the optimal characteristics, Densenet201-based deep learning is deployed to categorise the attacks in the Bot-IoT, CICIDS2017 and CICIDS2019 datasets. Furthermore, a Conditional Generative Adversarial Network (CGAN) is used to provide extra data samples for minority classes to address the issue of imbalanced data. The findings show that the proposed model can precisely identify and categorise DoS and DDoS attacks, outperforming existing intrusion detection approaches with 99.43%, 99.26% and 99.38% accuracy on CICIDS2019, CICIDS2017 and BoT-IoT, respectively.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-21T07:00:00Z
DOI: 10.1142/S0219649224500333
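BPOA's details are not given in the abstract; what binary feature-selection metaheuristics share is a search over 0/1 feature masks. The sketch below substitutes simple bit-flip hill climbing and a hypothetical scoring function for the pigeon-inspired search, purely to show the mask representation:

```python
import random

def select_features(score, n_features, iters=200, seed=0):
    """Bit-flip hill climbing over a binary feature mask -- a
    stand-in for binary swarm optimisers like BPOA, which search
    the same space of 0/1 masks."""
    rnd = random.Random(seed)
    mask = [1] * n_features          # start with all features on
    best = score(mask)
    for _ in range(iters):
        i = rnd.randrange(n_features)
        cand = mask[:]
        cand[i] ^= 1                 # flip one feature bit
        s = score(cand)
        if s >= best:
            mask, best = cand, s
    return mask

# Hypothetical score: features 0 and 2 are informative; every
# selected feature also adds a fixed cost.
def score(mask):
    gain = 3 * mask[0] + 2 * mask[2]
    cost = 0.5 * sum(mask)
    return gain - cost

best_mask = select_features(score, 5)
```

In the real framework the score would be a classifier's validation accuracy on the masked feature set, which is far more expensive to evaluate.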
-
- A Comprehensive Survey on Deep Learning Techniques for Digital Video
Forensics-
Authors: T. Vigneshwaran, B. L. Velammal
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
With advances in connected technologies, social media and networking provide a wide-open platform to share information via audio, video, text, etc. Since the invention of smartphones, video content is manipulated every day. Videos contain sensitive or personal information that may be forged for personal gratification or used to threaten victims for money. Identifying video falsification therefore plays a prominent role in digital forensics. This paper provides a comprehensive survey of the various problems in video falsification and the deep learning models used to detect forgery. The survey offers a deep understanding of the algorithms implemented by various authors, together with their advantages and limitations, thereby providing insight for future researchers.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-21T07:00:00Z
DOI: 10.1142/S0219649224500345
-
- MeSH-Based Semantic Weighting Scheme to Enhance Document Indexing:
Application on Biomedical Document Classification-
Authors: Imen Gabsi, Hager Kammoun, Dalila Souidi, Ikram Amous
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The document indexing phase plays a significant role in text mining applications such as text document classification. The common indexing paradigm is based on term frequency in documents, known as the Bag-Of-Words (BOW) representation approach. However, this classical approach suffers from the ambiguity and disparity of words. In addition, traditional term weighting schemes, such as TF-IDF, exploit only the statistical information of terms in documents. To overcome these problems, we are interested in biomedical semantic document indexing using concepts extracted from the MeSH knowledge resource. We first focus on a disambiguation method to identify the adequate senses of ambiguous MeSH concepts, and consider four representation enrichment strategies to identify the most appropriate representatives of the adequate sense in the textual entity representation. Second, we propose a semantic weighting scheme that quantifies a MeSH concept's importance in documents through its occurrence frequency and its semantic similarities with unambiguous MeSH concepts. Our contribution lies particularly in the in-depth experimental study of the performance of these methods, and precisely of the impact of the semantic weighting scheme on performance. To this end, three benchmark datasets, TREC 2004 Genomics, BioCreative II and OHSUMED, were used.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-21T07:00:00Z
DOI: 10.1142/S0219649224500357
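As a point of reference for the purely statistical weighting the authors contrast with their semantic scheme, a minimal TF-IDF computation looks like the following (the toy corpus is illustrative only, not from the paper):

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenised documents.
    TF is the raw term frequency in a document; IDF is
    log(N / df) with N documents and df the document frequency."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per doc
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

# Toy biomedical-flavoured corpus (hypothetical).
docs = [
    ["gene", "expression", "cancer"],
    ["gene", "therapy"],
    ["cancer", "therapy", "trial"],
]
w = tf_idf(docs)
# "expression" appears in 1 of 3 docs, so its weight is log(3);
# "gene" appears in 2 of 3 docs, so its weight is log(3/2).
```

The proposed semantic scheme would adjust such weights using similarity to unambiguous MeSH concepts rather than document frequency alone.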
-
- Grammatical versus Spelling Error Correction: An Investigation into the
Responsiveness of Transformer-Based Language Models Using BART and
MarianMT-
Authors: Rohit Raju, Peeta Basa Pati, SA Gandheesh, Gayatri Sanjana Sannala, KS Suriya
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Text remains a relevant form of representation for information. Text documents are created either on digital native platforms or through the conversion of other media files such as images and speech. While digital native text is invariably obtained through physical or virtual keyboards, technologies such as OCR and speech recognition are utilised to transform images and speech signals into text content. All these mechanisms of text generation also introduce errors into the captured text. This project analyses the different kinds of errors that occur in text documents. The work employs two advanced deep neural network-based language models, namely BART and MarianMT, to rectify the anomalies present in the text. Transfer learning of these models on the available dataset is performed to fine-tune their capacity for error correction. A comparative study investigates the effectiveness of these models in handling each of the defined error categories. It is observed that while both models can reduce the number of erroneous sentences by more than 20%, BART handles spelling errors far better (24.6%) than grammatical errors (8.8%).
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-21T07:00:00Z
DOI: 10.1142/S0219649224500370
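Per-category error-reduction percentages of the kind quoted above can be computed with a simple sentence-level metric. The sketch below assumes an exact-match correction criterion and hypothetical sentences; the paper's exact metric is not specified here.

```python
def correction_rate(before, after, references):
    """Fraction of initially erroneous sentences that a model's
    output turns into an exact match with the reference."""
    wrong = [(b, a, r) for b, a, r in zip(before, after, references)
             if b != r]                      # sentences with errors
    if not wrong:
        return 0.0
    fixed = sum(1 for _, a, r in wrong if a == r)
    return fixed / len(wrong)

# Hypothetical spelling-error category: the model fixes one of the
# two erroneous sentences.
src = ["I recieve mail", "the weathr is bad"]
out = ["I receive mail", "the weathr is bad"]
ref = ["I receive mail", "the weather is bad"]
rate = correction_rate(src, out, ref)
```

Running the same metric separately per error category (spelling vs grammar) gives the kind of breakdown reported for BART above.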
-
- Social Recommendation Framework: A Case Study of Chinese Long-Stayers in
Chiang Mai-
Authors: Achara Khamaksorn, Danaitun Pongpatcharatorntep, Sirikorn Santirojanakul, Die Hu
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Chiang Mai (CNX), a popular city in northern Thailand, has attracted an increasing number of Chinese visitors who stay long-term for diverse purposes, which facilitates local economic and cultural development. As social networks (SNs) are widely used to disseminate information and accelerate problem-solving, social recommendations (SRs) can be generated to address the diverse and dynamic long-term residential demands of Chinese users in a multicultural context. This research aims to develop an SN-based recommendation framework for Chinese long-stayers in CNX that addresses the social recommendation problems of target long-stay users in a cross-cultural context. The paper employed a mixed-method research design based on knowledge management processes to acquire, store, share and apply the knowledge needed for SN analysis. The results showed that the proposed framework effectively provides filtered and efficient SRs that enable Chinese users to make decisions and formulate strategies during their long-term residence in CNX. The preliminary work also illustrates the positive impact of individual demographic and SN characteristics on the performance of SRs, which researchers and practitioners can use to develop innovative business and management strategies regarding Chinese long-stayers in a cross-cultural context. Further studies are required to identify additional factors that enhance the effectiveness and utility of SRs for Chinese long-term residents in a cross-cultural environment.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-21T07:00:00Z
DOI: 10.1142/S0219649224500394
-
- Harnessing Attention-Based Graph Recurrent Neural Networks for Enhanced
Conversational Flow Prediction via Conversational Graph Construction-
Authors: R. Sujatha, K. Nimala
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Conversational flow refers to the progression of a conversation, encompassing the arrangement of topics discussed and how responses are delivered. A smooth flow involves participants taking turns to speak and respond naturally and intuitively. Conversely, a more disjointed flow may entail prolonged pauses or difficulties establishing common ground. Numerous factors influence conversation flow, including the personalities of those involved, their familiarity with each other, and the contextual setting. A conversational graph pattern outlines how a conversation typically unfolds or the underlying structure it adheres to. It involves combining different sentence types, the sequential order of topics discussed, and the roles played by different individuals. Predicting subsequent sentences relies on predefined patterns, the context derived from prior conversation flow in the data, and the trained system. The accuracy of sentence predictions varies based on the probability of identifying sentences that fit the subsequent pattern. We employ the Graph Recurrent Neural Network with Attention (GRNNA) model to generate conversational graphs and perform next-sentence prediction. This model constructs a conversational graph using an adjacency matrix, node features (sentences), and edge features (semantic similarity between the sentences). The proposed approach leverages attention mechanisms, recurrent updates, and information aggregation from neighbouring nodes to predict the next node (sentence). The model achieves enhanced predictive capabilities by updating node representations through multiple iterations of message passing and recurrent updates. Experimental results using the conversation dataset demonstrate that the GRNNA model surpasses the Graph Neural Network (GNN) model in next-sentence prediction, achieving an impressive accuracy of 98.89%.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-03-19T07:00:00Z
DOI: 10.1142/S0219649224500382
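The conversational graph described above has sentences as nodes and semantic similarity as edge features. A minimal sketch of that adjacency-matrix construction, using bag-of-words cosine similarity as a simple stand-in for the paper's semantic similarity measure (threshold and sentences are hypothetical):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token-count dictionaries."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def conversation_graph(sentences, threshold=0.2):
    """Adjacency matrix whose edge weights are the semantic
    similarity between sentence nodes; weak edges are dropped."""
    bows = [Counter(s.lower().split()) for s in sentences]
    n = len(bows)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                sim = cosine(bows[i], bows[j])
                adj[i][j] = sim if sim >= threshold else 0.0
    return adj

sents = ["how are you", "how are things", "the weather is nice"]
adj = conversation_graph(sents)
```

The GRNNA model would then run attention-weighted message passing over this matrix; here only the graph-construction step is shown.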
-
- Role of Blockchain Technology in Smart Era: A Review on Possible Smart
Applications-
Authors: Amit Kumar Tyagi, Swetta Kukreja, Richa, Poushikkumar Sivakumar
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
By the end of 2021, the worldwide cryptocurrency market valuation had hit an all-time high of US$3 trillion. Blockchain technology underpins cryptocurrencies such as Bitcoin and Ethereum. The adoption of blockchain, as well as the technology and products it enables, will continue to have a significant influence on company operations. However, blockchain technology is much more than a secure cryptocurrency transfer method. It may be utilised in areas other than finance, such as healthcare, insurance, voting, welfare benefits, gaming and artist royalties. The global economy is prepared for the blockchain revolution, with the technology already influencing business and society on many levels. If the term “revolution” seems extreme, consider that eight of the world’s ten largest corporations are developing a variety of blockchain-based solutions. Any enterprise or organisation engaged in recording and overseeing any type of transaction stands to profit from shifting its operations to a blockchain-based platform. This paper discusses the various roles blockchain has taken up over the years in several industries, along with future opportunities and the scope for expansion into numerous other professional sectors.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-02-27T08:00:00Z
DOI: 10.1142/S0219649224500321
-
- Development of Honey Badger-Cat Swarm Optimisation-Based Parallel Cascaded
Deep Network for Software Bug Prediction Framework-
Authors: Anurag Gupta, Mayank Sharma, Amit Srivastava
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Software bug prediction is mainly used for testing and code inspection, and has been carried out using network measures over the decades. However, classical fault prediction methods fail to capture the semantic differences among programs, which degrades the prediction models designed on these aspects; capturing semantic differences is necessary to design a prediction model accurately and effectively. A software defect prediction system also faces many difficulties in identifying defective modules, such as correlation, irrelevant aspects, data redundancy, and missing samples or values. Consequently, many approaches have been designed to predict software bugs by categorising faulty and non-faulty modules using software metrics, but only a few works have focussed on mitigating the class imbalance problem in bug prediction. To overcome these problems, an efficient software bug prediction method with an enhanced classifier is developed. For experimentation, the input data are taken from standard online data sources. The input data first undergo a pre-processing phase, and the pre-processed data are then provided as input to feature extraction using an Auto-Encoder. The obtained features are used to derive optimal fused features with the help of a new Hybrid Honey Badger Cat Swarm Algorithm (HHBCSA). Finally, these features are fed to an Optimised Parallel Cascaded Deep Network (OPCDP), in which an Extreme Learning Machine (ELM) and a Deep Belief Network (DBN) are used to predict software bugs, with the parameters of both classifiers optimised by the proposed HHBCSA. The investigations show that the recommended method offers faster bug prediction, which helps to detect and remove software bugs easily and accurately.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-02-01T08:00:00Z
DOI: 10.1142/S0219649224500047
-
- Design and Development of Intelligent Learning System for University
Innovation and Entrepreneurship Based on Knowledge Visualisation-
Authors: Bibo Feng
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
To build a comprehensive learning resource system covering the learning process, learning users, learning resources and other aspects, an intelligent learning system for innovation and entrepreneurship in universities was designed and constructed in combination with knowledge visualisation theory. The system includes a student information record module, a system recommendation calculation module, a learning material module, and a knowledge visualisation module. The research focuses on the system recommendation calculation module, which comprises a User-based Collaborative Filtering (UCF) algorithm and a Joint Recommendation (JR) algorithm that incorporates a content-based recommendation algorithm. The optimal number of neighbours for JR is 20 and the optimal transmission path length is 2; the recommendation effect is best under these settings. The convergence iterations of the JR algorithm on the training and test sets are 80 and 100, respectively. On both the training and test sets, the accuracy, precision, sensitivity, recall, running time, error and other performance indicators of the JR algorithm are better than the corresponding values of other recommendation algorithms. The system performs well in design and user experience when applied to an online learning platform.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-31T08:00:00Z
DOI: 10.1142/S0219649224500242
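A user-based collaborative filtering step of the kind the UCF module performs can be sketched as follows. The toy ratings, cosine similarity and weighted-average prediction are assumptions for illustration, not the paper's exact formulation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two {item: rating} profiles."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(r * r for r in u.values()))
    nv = math.sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv)

def predict(ratings, user, item, k=2):
    """Predict a rating as the similarity-weighted average of the
    k most similar users who rated the item (the 'neighbours')."""
    sims = sorted(
        ((cosine(ratings[user], ratings[o]), o)
         for o in ratings if o != user and item in ratings[o]),
        reverse=True)[:k]
    num = sum(s * ratings[o][item] for s, o in sims)
    den = sum(s for s, _ in sims)
    return num / den if den else 0.0

ratings = {
    "alice": {"m1": 5, "m2": 3},
    "bob":   {"m1": 5, "m2": 3, "m3": 4},
    "carol": {"m1": 1, "m3": 2},
}
p = predict(ratings, "alice", "m3")
```

The neighbour count `k` here plays the role of the "best neighbour number" (20) tuned for the JR algorithm above.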
-
- Cyberbullying Detection Model for Arabic Text Using Deep Learning
-
Authors: Reem Albayari, Sherief Abdallah, Khaled Shaalan
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In the new era of digital communications, cyberbullying is a significant concern for society. Cyberbullying can negatively impact stakeholders, with effects ranging from the psychological to the pathological, such as self-isolation, depression and anxiety, potentially leading to suicide. Hence, detecting any act of cyberbullying in an automated manner will help stakeholders prevent unfortunate outcomes for victims. Data-driven approaches such as machine learning (ML), and particularly deep learning (DL), have shown promising results. However, meta-analysis shows that ML approaches, particularly DL, have not been extensively studied for the Arabic text classification of cyberbullying. Therefore, in this study, we conduct a performance evaluation and comparison of various DL algorithms (LSTM, GRU, LSTM-ATT, CNN-BLSTM, CNN-LSTM and LSTM-TCN) on different Arabic cyberbullying datasets to obtain more precise and dependable findings. Based on the models’ evaluation, a hybrid DL model is proposed that combines the best characteristics of the baseline models CNN, BLSTM and GRU for identifying cyberbullying. The proposed hybrid model improves accuracy on all the studied datasets and can be integrated into different social media sites to automatically detect cyberbullying in Arabic social data, with the potential to significantly reduce cyberbullying. Applying DL to cyberbullying detection within Arabic text classification can be considered a novel approach due to the complexity of the problem, the tedious process involved, and the scarcity of relevant research studies.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-30T08:00:00Z
DOI: 10.1142/S0219649224500163
-
- The Role of Information Technology in Strengthening Strategic Flexibility
and Organisational Resilience of Small Medium Enterprises Post COVID-19-
Authors: Ragmoun Wided
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Most organisations invest substantial resources to develop and integrate information technology (IT) and its use. Information technology capabilities (ITC) can positively impact organisations in many different ways. This study analyses how ITC affect organisational resilience (OR) through information quality (IQ) and strategic flexibility (SF). It examines the mediating role of IQ and the moderating effect of environmental turbulence (market and technological). The developed model was evaluated using data from 400 firms, analysed with structural equation modelling (SEM). The findings indicate that ITC strongly and positively impact SF through IQ to increase OR. We also demonstrate that environmental turbulence moderates the influence of SF on OR, but has no effect on the relationship between IT and SF. The developed and tested model defines a critical and pragmatic pathway to OR through a capability view.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S0219649224500011
-
- HOTCP: Hybrid Optimal Test Case Prioritisation with Multi-Objective
Constraints-
Authors: Mukund Baburao Wagh, Vishal V. Puri, Sanjay B. Waykar, Rajesh Kadu
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Late defect detection and resource limitations during software evaluation have led to several software breakdowns or malfunctions. After identifying the difficulties in the regression testing process, many researchers have focussed on test cases, or rather on prioritising validation suites. The test case prioritisation technique is presented as a solution to this problem: it increases the fault detection rate. Earlier studies have implemented many techniques, but their fault detection rates remain unsatisfactory. To overcome this drawback, we propose the HOTCP (Hybrid Optimal Test Case Prioritisation with Multi-Objective Constraints) model, which includes two steps: test case generation followed by test case prioritisation. Test cases are generated from the released software, and prioritisation is then performed by an optimisation strategy in which the multi-objective function is defined over constraints such as statement coverage, branch coverage, contribution index and fault-exposing potential. For this optimisation process, a new algorithm termed CCCOA (Customised Coot and Chimp Optimisation Algorithm) is proposed, combining the Coot and Chimp optimisation algorithms. The system produces prioritised test cases, and the performance of the proposed method is validated against traditional methods on several metrics.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S0219649224500126
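Multi-objective prioritisation over criteria like those named above can be illustrated with a simple weighted-sum sketch. The weights and coverage values are hypothetical, and direct sorting stands in for the paper's CCCOA search:

```python
def prioritise(test_cases, weights=(0.4, 0.3, 0.3)):
    """Rank test cases by a weighted multi-objective score over
    statement coverage, branch coverage and fault-exposing
    potential, each assumed normalised to [0, 1]."""
    ws, wb, wf = weights
    def score(tc):
        return ws * tc["stmt_cov"] + wb * tc["branch_cov"] + wf * tc["fep"]
    return sorted(test_cases, key=score, reverse=True)

# Hypothetical suite: t2 trades statement coverage for high
# branch coverage and fault-exposing potential.
suite = [
    {"name": "t1", "stmt_cov": 0.9, "branch_cov": 0.5, "fep": 0.2},
    {"name": "t2", "stmt_cov": 0.4, "branch_cov": 0.9, "fep": 0.9},
    {"name": "t3", "stmt_cov": 0.2, "branch_cov": 0.1, "fep": 0.1},
]
order = [tc["name"] for tc in prioritise(suite)]
```

A metaheuristic such as CCCOA becomes useful when objectives interact (e.g. coverage overlap between tests), so the best ordering is no longer a simple per-test sort.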
-
- Blockchain-Assisted Access Control with Shuffled Shepherd Fire Hawk
Optimisation-Key Generation for Privacy Protection in Cloud-
Authors: Vitthal Sadashiv Gutte, Yogita Hande, Sarika Tanaji Deokate, Poonam Chandrakant Bhosale, Yogesh R. Kulkarni
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The cloud is a computing model providing on-demand availability of computer system resources, mainly data storage and computing power, without active management by the user. It enables sharing and reduces users' computing and storage costs. With the growth in cloud scale and intensity, cloud security is becoming a vital issue in the cloud computing field. Here, a newly optimised hybrid algorithm, the Shuffled Shepherd Fire Hawk Optimization (SSFHO) algorithm, is proposed for key generation to provide privacy protection in the cloud. The entities involved in this approach are the Data Owner (DO), Data User (DU), blockchain, and Cloud Service Provider (CSP). The access control process includes seven stages: initialisation, registration, key generation, data access control, authorisation, authorisation revocation and data protection. Registration is done in three phases: cloud registration, user registration and resource publishing. SSFHO integrates the Shuffled Shepherd Optimization Algorithm (SSOA) and the Fire Hawk Optimization (FHO) algorithm. The model is evaluated with performance metrics such as authorisation time, privacy rate and memory, revealing a high privacy rate of 0.93 and low authorisation time and memory values of 0.559 and 0.843 GHz, respectively.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S021964922450014X
-
- Handling Massive Sparse Data in Recommendation Systems
-
Authors: V. Lakshmi Chetana, Hari Seetha
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Collaborative filtering-based recommendation systems have become significant in various domains due to their ability to provide personalised recommendations. In e-commerce, these systems analyse users' browsing history and purchase patterns to recommend items. In the entertainment industry, collaborative filtering helps platforms like Netflix and Spotify recommend movies, shows and songs based on users' past preferences and ratings. The technology is also significant in online education, where it assists in suggesting relevant courses and learning materials based on a user's interests and previous learning behaviour. Even though much research has been done in this domain, the problems of sparsity and scalability in collaborative filtering still exist. Data sparsity refers to users having too few preferences on items, making it difficult to understand their preferences; scalability is a challenge because recommendation systems must keep users engaged with fast responses over data that is growing quickly. Sparsity affects recommendation accuracy, while scalability influences the complexity of processing the recommendations. The motivation of the paper is to design efficient algorithms that address the sparsity and scalability problems, in turn providing a better user experience and increased user satisfaction. This paper proposes two separate, novel approaches that deal with both problems. In the first approach, an improved autoencoder is used to address sparsity, and its outcome is then processed in a parallel and distributed manner using a MapReduce-based k-means clustering algorithm with the Elbow method: since the k-means clustering technique uses a predetermined number of clusters, which may not improve accuracy, the Elbow method identifies the optimal number of clusters.
In the second approach, a MapReduce-based Gaussian Mixture Model (GMM) with Expectation-Maximisation (EM) is proposed to handle large volumes of sparse data. Both proposed algorithms are implemented on the MovieLens 20M and Netflix movie recommendation datasets to generate movie recommendations and are compared with other state-of-the-art approaches using metrics such as RMSE, MAE, precision, recall and F-score. The outcomes demonstrate that the second proposed strategy outperforms the state-of-the-art approaches.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S0219649224500217
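The Elbow method mentioned above picks the number of clusters at which the within-cluster sum of squares (WCSS) stops dropping sharply. A minimal sketch on 1-D toy data, using plain Lloyd's k-means rather than the paper's MapReduce version:

```python
import random

def wcss(points, k, iters=20, seed=0):
    """Run k-means on 1-D data and return the within-cluster sum
    of squares, the quantity the elbow method plots against k."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # recompute centroids; keep old centroid if a cluster empties
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

# Two well-separated 1-D groups: WCSS should drop sharply from
# k=1 to k=2 and flatten afterwards -- the "elbow" sits at k=2.
data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
curve = {k: wcss(data, k) for k in (1, 2, 3)}
```

In the paper's setting, the assignment and centroid-update steps would run as MapReduce map and reduce phases over the distributed user vectors.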
-
- Design and Implementation of Data Management and Visualisation Module in
Financial Digital Management-
Authors: Junying Ren
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Enterprise financial data is a key indicator of enterprise development and provides an important basis for management analysis and decision-making. Providing reliable and effective information services to enterprises through visualisation technology has therefore become an urgent problem in the construction of enterprise informatisation. At present, the common data statistics and visualisation tools on the market struggle to meet the data analysis needs of specialised financial enterprises. Additionally, the current financial management system has several issues, including an overabundance of data and poor suitability for observation. To address the deficiency of the system's data management function, this paper studies an improved design of the data management and visualisation module in financial digital management. First, the k-means clustering algorithm and the C4.5 decision tree algorithm are selected to improve the financial data management system. Then, building on existing hierarchical data visualisation schemes, the node-link method, space-filling method and Sankey chart are proposed to display changes in financial data. Finally, the data management and visualisation module and the corresponding algorithm flow are designed. The experiments indicate a silhouette coefficient of 0.53 for the performance evaluation model based on the k-means algorithm, indicating a satisfactory clustering result. The employee violation prediction model, based on the C4.5 decision tree algorithm, exhibits a high prediction accuracy of 92.35% on the training dataset, demonstrating its effectiveness in predicting employee violations. The data rendering accuracy of the visualisation tool is 98.46%, significantly surpassing that of traditional visualisation tools, and its visual effect and operation are also better than those of traditional tools.
Compared with a traditional data visualisation system, this method improves the efficiency of enterprise financial data management, converts complicated financial data into graphics that are easier to understand, realises visualisation, and effectively reduces the management cost of financial operations.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S0219649224500230
-
- Evaluation of the Quality of Sustainable Entrepreneurship Education in
Universities Based on the Grey Correlation Algorithm-
Authors: Chen Li, Zhiyuan Sun
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
In order to strengthen the evaluation of the quality of sustainable entrepreneurship education in universities, a study was conducted to calculate the correlation degree of that quality based on the grey correlation (grey relational analysis) algorithm. The study also constructs an evaluation index system and evaluation model from three aspects: the foundation of the entrepreneurship education environment, the allocation of entrepreneurship education resources, and the outcomes of entrepreneurship education. The finding that the evaluation model closely matches the actual situation indicates that the evaluation indexes are scientifically sound to a certain degree. Compared with traditional methods, the grey-correlation-based evaluation method for sustainable entrepreneurship education in colleges and universities (C&U) achieves higher accuracy, recall and F1-score. Therefore, the method offers a better evaluation effect for assessing the quality of sustainable entrepreneurship education in C&U.
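The core of grey relational analysis (Deng's grey relational degree) can be sketched briefly. The data below are hypothetical quality indicators, not the study's dataset, and the index values are invented for illustration:

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Deng's grey relational degree of each comparison sequence against
    the reference sequence; rho is the distinguishing coefficient."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparisons, dtype=float)
    # Normalise each index to [0, 1] so differing units do not dominate.
    all_seq = np.vstack([ref, cmp_])
    lo, hi = all_seq.min(axis=0), all_seq.max(axis=0)
    norm = (all_seq - lo) / np.where(hi > lo, hi - lo, 1.0)
    ref_n, cmp_n = norm[0], norm[1:]
    # Absolute differences and the global extremes.
    delta = np.abs(cmp_n - ref_n)
    dmin, dmax = delta.min(), delta.max()
    # Grey relational coefficients, averaged per sequence.
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    return xi.mean(axis=1)

# Hypothetical scores of three universities on four indexes,
# compared against an ideal reference sequence.
reference = [1.0, 1.0, 1.0, 1.0]
candidates = [[0.9, 0.8, 0.95, 0.85],
              [0.5, 0.6, 0.55, 0.40],
              [1.0, 1.0, 1.0, 1.0]]      # identical to the reference
degrees = grey_relational_degree(reference, candidates)
print(degrees)
```

The sequence identical to the reference receives a degree of 1.0, and closer sequences rank higher, which is what makes the degree usable as a quality score.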
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S0219649224500266
-
- Framework for the Generation of Tourist Experiences
-
Authors: Lyda Jovanna Rueda Caicedo, Leonardo Bermón Angarita, Marcelo López Trujillo
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The tourism sector requires tools that allow for the creation of tourist experiences, based on the knowledge gleaned from service encounters between frontline employees and tourists. The objective of this study is to propose a framework, based on knowledge management, for the generation of tourist experiences. The study was carried out in three stages: first, a literature review related to tourist experiences, knowledge management, service encounters, and frontline employees was conducted. Next, a BPMN-based framework was designed. Finally, the proposed framework was validated through surveys with tourism experts from a cluster of tourism companies. The framework provides conceptual guidelines with useful worksheets for entrepreneurs that form part of tourism clusters, with which to generate tourist experiences.
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-23T08:00:00Z
DOI: 10.1142/S021964922450031X
-
- DiabNet: A Convolutional Neural Network for Diabetic Retinopathy Detection
-
Authors: S. Anitha, S. Priyanka
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Diabetic retinopathy is a leading cause of blindness among diabetic patients, and early detection is crucial. This research proposes DiabNet, a novel convolutional neural network (CNN) architecture designed to enhance the accuracy, efficiency, and robustness of diabetic retinopathy detection from retinal images. DiabNet incorporates skip connections, attention mechanisms, and batch normalisation to improve feature extraction. The paper details DiabNet's architecture, feature extraction, and training process. Evaluation on a standard dataset shows that DiabNet surpasses existing methods in accuracy, efficiency, and robustness. The research also explores the interpretability of DiabNet and suggests future research directions. Deployment as a mobile app enables convenient and accessible screening, with the potential to significantly improve early detection and management of diabetic retinopathy. Experimental validation shows that the proposed architecture is feasible for real-time deployment, yielding an accuracy of 98.72%.
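The abstract does not specify DiabNet's layers, but the two named ingredients — skip connections and attention — can be illustrated in a toy numpy sketch (not the paper's network; the kernel and feature maps are invented):

```python
import numpy as np

def conv2d(x, kernel):
    """'Same'-padded 2D convolution of a single-channel image (naive loops)."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * kernel).sum()
    return out

def residual_block(x, kernel):
    """Skip connection: the input is added back onto the convolved map,
    so fine detail (and gradients) can bypass the transformation."""
    return np.maximum(conv2d(x, kernel), 0.0) + x   # ReLU(conv(x)) + x

def channel_attention(features):
    """A toy attention gate: softmax over per-map mean activations
    reweights the feature maps so informative ones dominate."""
    scores = np.array([f.mean() for f in features])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return [wi * f for wi, f in zip(w, features)]
```

With an all-zero kernel, the residual block reduces to the identity — exactly the "easy to learn nothing" property that makes deep residual networks trainable.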
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-15T08:00:00Z
DOI: 10.1142/S0219649224500308
-
- Sentiment Analysis Using Deep Learning Approaches on Multi-Domain Dataset
in Telugu Language-
Authors: Kannaiah Chattu, D. Sumathi
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
Recent advancements in Natural Language Processing (NLP) have made sentiment analysis an essential component of a variety of NLP tasks, including recommendation systems, question answering, and business intelligence products. While sentiment analysis has been widely studied in English, it has rarely been attempted for Telugu. The majority of existing work concentrates on analysing the sentiments of tweets, news, or reviews containing Hindi and English words. There is growing interest among academics in studying how people express their thoughts and views in Indian languages such as Bengali, Telugu, Malayalam and Tamil. Owing to a paucity of labelled datasets, to our knowledge little research on Indian languages has been published. This work proposes sentence-level sentiment analysis on multi-domain datasets collected in Telugu. Deep learning models are used because they have demonstrated strong performance in sentiment analysis and are widely regarded as the state of the art in Telugu sentiment analysis. The proposed work investigates a Bidirectional Long Short-Term Memory (BiLSTM) network and a Bidirectional GRU (BiGRU) network for improving Telugu sentiment analysis by capturing contextual information from Telugu feature sequences through forward-backward encoding. Further, the model is deployed on merged domains to measure accuracy and other performance metrics. Experimental findings show that the deep learning models outperform baseline traditional machine learning methods on four benchmark sentiment analysis datasets, with improved precision, recall, F1-score and accuracy in certain cases. The proposed model achieves an F1-score of 86% on the song dataset, surpassing the other existing models.
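The forward-backward encoding idea behind BiLSTM/BiGRU can be sketched with plain tanh RNN cells (the gated LSTM/GRU internals are omitted for brevity; weights and dimensions are invented):

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, reverse=False):
    """A plain tanh RNN over a sequence of feature vectors; with
    reverse=True it reads the sequence right-to-left."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in (xs[::-1] if reverse else xs):
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return states[::-1] if reverse else states

def bidirectional_encode(xs, Wx_f, Wh_f, Wx_b, Wh_b):
    """Concatenate forward and backward states so every position sees
    both left and right context -- the core idea of BiLSTM/BiGRU."""
    fwd = rnn_pass(xs, Wx_f, Wh_f)
    bwd = rnn_pass(xs, Wx_b, Wh_b, reverse=True)
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d, h, T = 4, 3, 5                  # feature dim, hidden dim, sequence length
xs = [rng.normal(size=d) for _ in range(T)]
enc = bidirectional_encode(xs,
                           rng.normal(size=(h, d)), rng.normal(size=(h, h)),
                           rng.normal(size=(h, d)), rng.normal(size=(h, h)))
print(len(enc), enc[0].shape)      # T vectors of size 2*h
```

A sentiment classifier would then pool these contextual vectors and feed them to a softmax layer; real implementations swap the tanh cell for an LSTM or GRU cell.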
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-10T08:00:00Z
DOI: 10.1142/S0219649224500187
-
- Untapped Location Discovery on Social Media by Combining Geospatial
Clustering with Natural Language Processing-
Authors: Siddharth Mehta, Gautam Jain, Shuchi Mala
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
This methodology combines geospatial clustering and Natural Language Processing (NLP) to create a framework for discovering unexplored geotags in social media. The framework covers the collection of data from social media platforms; preprocessing with the Pandas, Natural Language Toolkit (NLTK) and SpaCy libraries for NLP analysis, sentiment analysis and named entity recognition; spatial clustering with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN), K-Means and HDBSCAN algorithms; and visualisation with the Matplotlib and Folium libraries. Data analysis and statistics were performed using the Pandas and NumPy libraries, with further exploration through the selection and collection of additional data based on the previous step. In addition, a prediction model has been developed to predict a location cluster from its name by comparing it to the preprocessed comma-separated values data file. Certain locations, such as small-scale hospitals or little-known tourist places, are not currently tagged in available map applications. This framework can help researchers and policy makers identify those locations, gain insights from social media data, and exploit its potential for decision-making in various fields.
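To show how DBSCAN groups geotags, here is a minimal stdlib sketch over hypothetical (lat, lon) points (the paper would use library implementations such as scikit-learn's; the coordinates below are invented):

```python
import math
from collections import deque

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def dbscan(points, eps_km, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    n = len(points)
    neighbours = [[j for j in range(n)
                   if haversine_km(points[i], points[j]) <= eps_km]
                  for i in range(n)]
    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbours[i]) < min_pts:        # not a core point
            labels[i] = -1
            continue
        labels[i] = cluster                     # grow a new cluster via BFS
        queue = deque(neighbours[i])
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster             # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours[j]) >= min_pts:   # core points keep expanding
                queue.extend(neighbours[j])
        cluster += 1
    return labels

# Two hypothetical venues' geotag bursts plus one stray point.
pts = [(28.610, 77.200), (28.612, 77.201), (28.611, 77.199),
       (19.070, 72.870), (19.071, 72.872), (19.069, 72.871),
       (10.000, 10.000)]
print(dbscan(pts, eps_km=1.0, min_pts=3))   # → [0, 0, 0, 1, 1, 1, -1]
```

Clusters of dense geotags like these would then be cross-referenced with NLP-extracted place names to surface untagged locations.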
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-03T08:00:00Z
DOI: 10.1142/S0219649224500254
-
- Correlation between Vaccination and Child Mortality Rate Using
Multivariate Linear Regression Model-
Authors: A. Revathi, R. Kaladevi, M. Vimaladevi, S. Hariharan, A. K. Cherukuri, R. Sujatha
Abstract: Journal of Information & Knowledge Management, Ahead of Print.
The population has increased drastically over the years, and new diseases emerge alongside it. Immunisation is a preventive measure that makes a person resistant or immune to a disease. Vaccination stimulates our own immune system against infection or disease. Vaccines are available for more than twenty life-threatening diseases and save millions of lives throughout the world. At the 70th World Health Assembly, held in 2017, around 194 countries participated and pledged to strengthen vaccination, thereby pursuing the goals of the Global Vaccine Action Plan (GVAP). In spite of remarkable immunisation progress, approximately 20 million infants miss out on vaccination every year. Immunisation progress has stalled or even reversed in some countries, and there is a real risk that complacency will undermine past achievements. This paper considers the database of vaccine consumption rates from many countries issued by the WHO to analyse the reasons for poor access to vaccines with respect to mortality and poverty levels. For this analysis, the relations among vaccine consumption by children below five years of age, the child death rate records issued by the United Nations Children's Fund (UNICEF), and the poverty index issued by the United Nations are considered. A multivariate linear regression algorithm is used to identify the correlation between the datasets. The results show that an increase in vaccination coverage reduces the mortality rate in most countries. A correlation coefficient of 0.7 was found between the infant mortality rate (IMR) and vaccine dosages. The poverty index of sub-Saharan countries has a direct impact on their declining vaccination coverage.
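The regression-plus-correlation workflow can be sketched in a few lines of numpy. The country figures below are fabricated for illustration only — they are not the WHO/UNICEF data used in the paper:

```python
import numpy as np

# Illustrative country rows: [vaccination coverage %, poverty index]
# against under-five mortality per 1,000 live births.
X = np.array([[95, 5], [90, 10], [80, 25], [70, 35], [60, 50], [50, 60.0]])
y = np.array([6, 9, 20, 30, 45, 55.0])

# Multivariate linear regression: least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_vaccine, b_poverty = coef

# Pearson correlation between coverage and mortality
# (negative when higher coverage goes with lower mortality).
r = np.corrcoef(X[:, 0], y)[0, 1]
print(round(r, 3))
```

On real data the sign and magnitude of `r` are what supports the paper's conclusion; note that coverage and poverty are themselves correlated, so individual regression coefficients should be interpreted with care.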
Citation: Journal of Information & Knowledge Management
PubDate: 2024-01-03T08:00:00Z
DOI: 10.1142/S0219649224500278
-