Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Abstract: In a wide variety of applications, such as selecting indoor positions for advertising or setting rents for shops in a shopping mall, it is valuable to know the passenger flow of each room. In indoor spaces, user positions are commonly captured by an indoor positioning system consisting of static positioning devices, and the time-ordered sequence of all tracking events for a user forms that user's indoor trajectory. In this paper, we define and study two essential queries, named Rooms with top-k passenger flows at a Timestamp query (RkT for short) and Rooms with top-k passenger flows within a time Interval query (RkI for short), i.e., how to search for the rooms with the top-k passenger flows at a past timestamp or within a past time interval using indoor trajectories, respectively. Owing to cost, only a limited number of static positioning devices are deployed in an indoor space, and their detection ranges cover only a small part of it. When a user is in the undetected state, the uncertainty of the user's position, combined with the quite complex indoor topology, makes determining the passenger flow of each room highly challenging. Considering the distribution of static positioning devices, we propose a new method to reasonably infer where a user is while undetected, and with what probability, based on the user's indoor trajectory and the complex indoor topology. To quickly retrieve the set of indoor trajectories, we propose a full Binary tree indexing indoor trajectories divided by Time intervals (BiT for short), which is built over the given set of indoor trajectories. Based on the BiT index, we propose the PAT and PAI algorithms to efficiently process RkT and RkI queries, respectively.
Extensive experimental results demonstrate the superior performance of the PAT and PAI algorithms. PubDate: 2024-08-23
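The RkT query semantics can be illustrated with a minimal sketch: a hypothetical brute-force baseline, not the paper's BiT-indexed PAT algorithm. Given each user's (possibly uncertain) location distribution at a timestamp, a room's expected passenger flow is the sum of the per-user probabilities of being in that room.

```python
from collections import defaultdict
import heapq

def rooms_top_k(trajectories, t, k):
    """Top-k rooms by expected passenger flow at timestamp t.

    `trajectories` maps a user id to a function position(t) that returns a
    dict {room: probability} -- the (possibly uncertain) location
    distribution inferred for that user at time t.
    """
    flow = defaultdict(float)
    for position in trajectories.values():
        for room, p in position(t).items():
            flow[room] += p  # expected count: sum of per-user probabilities
    return heapq.nlargest(k, flow.items(), key=lambda kv: kv[1])

# Toy example: one user certainly in room A, one split between A and B.
trajs = {
    "u1": lambda t: {"A": 1.0},
    "u2": lambda t: {"A": 0.5, "B": 0.5},
}
print(rooms_top_k(trajs, t=0, k=1))  # [('A', 1.5)]
```

An RkI query would aggregate the same expected flows over every timestamp in the interval; the paper's BiT index exists precisely to avoid this linear scan over all trajectories.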
Abstract: Polycystic ovary syndrome (PCOS) is an endocrine disorder affecting women of reproductive age, characterized by the presence of multiple follicles in the ovaries that are detectable via ultrasound imaging. Early diagnosis of PCOS morphology can be challenging due to low resolution and increased speckle noise, making it difficult to identify smaller follicle boundaries. This article introduces a novel methodology, multiscale gradient-weighted oriented Otsu thresholding with sum-of-product fusion (MOT-SF), to address these challenges. The MOT-SF technique precisely recognizes smaller region boundaries even at lower resolutions by employing a pyramidal structure for image computation at multiple scales. Otsu's thresholding is used to segment the image, optimizing the threshold by minimizing the intraclass variance at each stage. Incorporating gradient weights (λ) within classes enhances smaller boundary regions and reduces noise. Additionally, the MOT-SF method integrates a sum-of-product fusion strategy, combining segmented images from various scales to produce a final image that preserves both small and large PCOS structures while mitigating noise. The experimental results show that MOT-SF outperforms traditional methods such as Otsu's thresholding and Chan-Vese models, as well as deep learning approaches such as R-CNN, in terms of computational efficiency and robustness to variations in ultrasound image quality. The MOT-SF methodology achieves an accuracy of nearly 85% and a precision of 94%, highlighting its potential to improve the detection and characterization of follicles in ultrasound images and to advance diagnostic tools in reproductive health. PubDate: 2024-08-19
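Since MOT-SF applies Otsu's thresholding at each scale of the pyramid, a minimal single-scale sketch may help. This is the textbook formulation (maximizing between-class variance, which is equivalent to minimizing intraclass variance), not the paper's gradient-weighted multiscale variant.

```python
def otsu_threshold(pixels, levels=256):
    """Single-scale Otsu: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]              # pixel count of the background class (<= t)
        if w0 == 0:
            continue
        w1 = total - w0            # pixel count of the foreground class (> t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0             # background mean
        m1 = (sum_all - sum0) / w1 # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy "image": dark cluster around 10, bright cluster around 200.
img = [10, 12, 11, 9, 200, 205, 198, 202]
t = otsu_threshold(img)  # any t in [12, 197] separates the clusters
```

MOT-SF would run this per pyramid level, weight the class statistics by gradients, and fuse the per-scale segmentations with a sum-of-product rule.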
Abstract: The key to improving the performance of graph convolutional networks (GCNs) is to fully explore the correlation between neighboring and distant information. Aiming at the over-smoothing problem of GCNs, and in order to make full use of the relationships among features, graphs, and labels, a graph residual generation network based on multi-information aggregation (MIA-GRGN) is proposed. Firstly, addressing the defects of GCNs, we design a deep initial-residual graph convolution network (DIRGCN), which connects the initial input through residuals so that the nodes at each layer retain part of the information of the initial features, ensuring the localization of the graph structure and effectively alleviating the over-smoothing problem. Secondly, we propose a random graph generation method (RGGM) utilizing graph edge sampling and negative edge sampling, and optimize the supervision loss function of DIRGCN in the form of a generation framework. Finally, applying RGGM and DIRGCN as inference modules for modeling hypotheses and obtaining approximate posterior distributions of unknown labels, an optimized loss function is obtained, and we construct the multi-information aggregation network MIA-GRGN, which combines the graph structure, node characteristics, and the joint label distribution. Experiments on benchmark graph classification datasets show that MIA-GRGN achieves better classification results than the benchmark models and mainstream models, especially for datasets with less dense edge relationships between nodes. PubDate: 2024-08-16
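The initial-residual connection described for DIRGCN can be sketched as follows; this is an assumption in the style of initial-residual GCNs, and the paper's exact propagation rule may differ:

```latex
H^{(l+1)} = \sigma\!\left(\big((1-\alpha)\,\hat{A}\,H^{(l)} + \alpha\,H^{(0)}\big)\,W^{(l)}\right)
```

where \(H^{(0)}\) is the initial feature input retained at every layer, \(\hat{A}\) is the normalized adjacency matrix, \(W^{(l)}\) is the layer's weight matrix, and \(\alpha\) controls how much of the initial signal each layer keeps, which is what counteracts over-smoothing as depth grows.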
Abstract: The amount of information is multiplying, and one of the most popular and widely used formats is the short video. Maintaining copyright protection of this information and preventing it from being disclosed without authorization is therefore a challenge. This work presents a way to rank a set of short videos based on a video-profile similarity metric, finding a set of reference videos with a self-supervised method that requires no human tagging. The self-supervised method uses a Genetic Algorithm to search for a subgroup of the most similar videos. Similarities are calculated using the SMAPE metric on fixed-size video signature vectors generated using the Structural Tensor, maximum submatrix, and t-SNE. PubDate: 2024-08-14
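The SMAPE comparison between two fixed-size signature vectors can be sketched in a few lines; this is a straightforward implementation of the standard metric, while the signature construction itself (Structural Tensor, maximum submatrix, t-SNE) is out of scope here.

```python
def smape(a, b):
    """Symmetric mean absolute percentage error between two equal-length vectors.

    Returns a value in [0, 200]; identical vectors score 0, so a similarity
    can be derived as e.g. 100 - SMAPE (an illustrative convention).
    """
    assert len(a) == len(b)
    total = 0.0
    for x, y in zip(a, b):
        denom = (abs(x) + abs(y)) / 2.0
        total += 0.0 if denom == 0 else abs(x - y) / denom
    return 100.0 * total / len(a)

print(smape([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```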
Abstract: This scoping review aimed to improve the understanding of important factors in digital accessibility for people with visual impairment and blindness, focusing on the first-hand experiences and challenges faced by this target group while using digital technologies. Keywords related to ‘digital technologies,’ ‘accessibility,’ ‘visual impairment,’ and ‘blindness’ were used in searching two databases (n = 683), with additional articles identified by means of manual searches (n = 60). Two reviewers independently screened the titles and abstracts to select 97 articles for full-text screening, of which 49 articles met the inclusion criteria and were selected for review based on the WCAG guidelines, highlighting details for consideration and improvement of the guidelines. The analysis revealed that users suffered from inaccessibility in several ways. For example, many applications or websites are developed for sighted users, where information is communicated through visual content without providing alternatives for assistive technology users. In addition, the lack of keyboard accessibility, shortcuts, or compatibility with different assistive technologies remains a consistent challenge. Furthermore, it was highlighted that simple accessibility measures are not followed adequately or consistently, such as providing alternative text for images or labels for links and buttons. This review highlighted the challenges and consequences of the inaccessibility of digital technologies, providing a detailed explanation regarding the elements that should be considered in the development of digital technologies. It is recommended that people with disabilities should be involved in the design of technology and its accessibility assessment. PubDate: 2024-08-01
Abstract: In an era marked by rapid digital transformation, understanding the factors that influence digital engagement is crucial for bridging the digital divide. This study aims to explore the impact of individual factors such as networking motive, social media use, content service usage, and economic activity on digital transformation awareness and satisfaction. Utilizing Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze data from 7,000 respondents of the National Information Society Agency (NIA)'s 2022 Digital Divide Survey, this research provides empirical insights into the dynamics of digital engagement. The findings reveal that networking motive significantly predicts social media use, which in turn slightly enhances digital transformation awareness but not satisfaction. Conversely, economic activity positively influences both awareness and satisfaction with digital transformation, underscoring the tangible benefits of digital economic engagement. Life service utilization emerged as a crucial factor, significantly impacting both awareness and satisfaction. These results offer critical implications for policymakers, educators, and digital platform developers, suggesting the need for targeted strategies to enhance digital literacy, promote inclusive digital services, and foster economic opportunities in the digital domain. PubDate: 2024-07-25
Abstract: Zigbee is a wireless network technology that operates on a community-based infrastructure. Its primary objective is to allow effective and inexpensive transmission of information across intelligent devices, focusing on minimizing latency and addressing potential challenges. Security risks related to Zigbee networks have become more prominent as Zigbee technology has been more widely deployed. Zigbee networks implement measures to mitigate network disruptions, prevent unauthorized access, and protect transmitted data against interception, and a plethora of encryption algorithms have been developed. The encryption technologies employed encompass the 128-bit Advanced Encryption Standard (AES) to encrypt data during transfer, while cryptographically secure protocols (CSPs) are implemented to ensure safe connections and authentication. This research introduces an innovative methodology for assessing the security of Zigbee-based Internet of Things (IoT) systems and thoroughly examines the security concerns associated with Zigbee technology. Adopting intelligent design concepts in the construction of IoT devices facilitates the implementation of robust security measures and reliable detection systems to mitigate potential attacks. The system can implement access control mechanisms based on authentication, guaranteeing the secure management of devices, networks, and protocols and facilitating the secure installation of software updates. PubDate: 2024-07-24
Abstract: A wireless sensor network (WSN) is a network of small embedded devices, named sensors, that are wirelessly connected to one another for data forwarding within the network. These sensor nodes (SNs) follow an ad-hoc configuration and are connected with the Base Station (BS) through the internet for data sharing. When large amounts of data are shared by several SNs, traffic arises within the network, and controlling and balancing the traffic loads (TLs) becomes significant. The TL is the amount of data shared by the network in a given time. Balancing these loads extends the network's lifetime and reduces the energy consumption (EC) rate of the SNs. Thus, Load Balancing (LB) within the network is very efficient for the network's energy optimization (EO); however, EO is the major challenge in WSNs. Several existing studies have concentrated on energy-efficient LB optimization to prolong the lifetime of the WSN. Therefore, this review presents a detailed survey of linear programming (LP)-based optimization models and alternative optimization models for energy-efficient LB in WSNs. LP is a technique used to maximize or minimize a linear function subject to linear constraints; LP methods are utilized for modeling the features, deployment, and location of sensors in a WSN. The analysis evaluates the efficacy of the developed models based on their fault tolerance rate, latency, handling of topological changes, and EC rates. Thus, this survey briefly explains the pros and cons of the developed load-balancing schemes for EO in WSNs. PubDate: 2024-07-17
Abstract: This study investigates the transformative influence of artificial intelligence (AI) on accounting procedures in Saudi Arabia by examining accounting professionals' attitudes, understanding, and practices regarding AI implementation. Using a questionnaire-based survey distributed among accounting professionals in Saudi Arabia, data analysis was conducted using the partial least squares (PLS) technique. We find significant direct relationships between AI awareness and usage, AI engagement and accountants, and the impact of AI and accounting procedures. This finding suggests that accountants who are knowledgeable about and utilize AI are more likely to be engaged with AI, leading to positive changes in accounting procedures. Moreover, the robust positive relationship between AI's impact on accounting procedures and accounting efficiency indicates a significant positive influence. The outcomes revealed that AI engagement and impact played significant mediating roles in these relationships. These findings suggest that, while AI awareness and usage alone can lead to improved accounting outcomes, the effect is mediated by the level of engagement with AI and its impact. We provide compelling evidence that AI positively affects the accounting profession: accountants who are aware of and use AI are more likely to engage with it and to experience positive changes in their accounting procedures. We provide theoretical insights into the influence of AI on accounting procedures, offer valuable resources to academics, and suggest areas for future research. The study also equips practitioners with effective strategies for incorporating AI into their practices, highlighting how to harness AI's transformative potential while addressing potential challenges. PubDate: 2024-07-16
Abstract: The task of aspect-level sentiment analysis is to determine the sentiment polarity directed towards a specific target within a sentence. With the increasing availability of online reviews and the growing importance of healthcare decisions, analyzing drug reviews has become a critical task. Traditional sentiment analysis, which categorizes a whole review as positive, negative, or neutral, provides limited insights for consumers and healthcare professionals. Aspect-based sentiment analysis (ABSA) aims to overcome these limitations by identifying and evaluating the sentiment associated with specific aspects or attributes of the drugs mentioned in the reviews. Various fields, including business, politics, and medicine, have been explored in the context of sentiment analysis. Automated analysis of online user reviews allows pharmaceutical companies to assess large amounts of user feedback and to extract insights on pharmacological efficacy and side effects, and the data collected can improve pharmacovigilance. Reviewing user comments provides valuable data for improving drug safety and efficacy monitoring procedures, thereby improving pharmacovigilance processes, the understanding of pharmaceutical outcomes, and corporate decision-making. Therefore, we propose a pre-trained RoBERTa with a Bi-LSTM model to categorise drug reviews from online sources and pre-process the text data. Ant Colony Optimization is used for feature selection in ABSA, helping to identify the most relevant aspects and sentiments. Further, RoBERTa is fine-tuned to perform ABSA on the dataset, enabling the system to categorize aspects and determine the associated sentiment. The outcomes reveal that the suggested framework achieves higher accuracy (96.78%) and F1 score (98.29%) on the druglib.com dataset, and 95.02% on the drugs.com dataset, than several prior state-of-the-art methods. PubDate: 2024-07-15
Abstract: The development of threshold protocols based on lattice signature schemes has been of increasing interest in the past several years. The main research focus has been on protocols constructed for various variants of Crystals-Dilithium, the future NIST digital signature standard known as ML-DSA. In this work, we propose TOPCOAT, a two-party lattice-based signature algorithm that embodies Dilithium's compression techniques. This result is achieved by introducing a new hinting mechanism that allows the parties to collaboratively calculate \(\textsf{HighBits}\). Our hinting mechanism allows public key compression similar to Dilithium's. Additionally, we suggest an optimization technique to minimize the number of restarts both parties need to produce a valid signature. Our approach produces \(\approx 10\) KB signatures within 3 rounds of communication. We prove the security of our scheme under the MLWE and MSIS assumptions in the ROM, and provide an implementation of the proposed scheme. As an additional contribution, we present vulnerabilities and inconsistencies found in the work of Liu et al. (Future Generation Computer Systems, 2023), which aimed to construct a distributed lattice-based signature protocol. PubDate: 2024-07-10
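For context, Dilithium-style \(\textsf{HighBits}\) come from a decomposition r = r1·α + r0 with a centered low part, and the compression keeps only r1. A simplified sketch (assuming Dilithium's modulus and omitting the vector/polynomial structure that the real scheme operates on) is:

```python
Q = 8380417  # Dilithium's modulus (assumption: Dilithium-style parameters)

def decompose(r, alpha):
    """Write r = r1*alpha + r0 (mod Q) with r0 centered in (-alpha/2, alpha/2].

    Simplified sketch of Dilithium's Decompose routine; HighBits keeps only
    r1, the part that survives public-key and signature compression.
    """
    r = r % Q
    r0 = r % alpha
    if r0 > alpha // 2:
        r0 -= alpha          # center the low part around zero
    if r - r0 == Q - 1:      # wrap-around corner case, handled as in Dilithium
        return 0, r0 - 1
    return (r - r0) // alpha, r0

def high_bits(r, alpha):
    return decompose(r, alpha)[0]

alpha = (Q - 1) // 44        # 2 * gamma2 for one Dilithium parameter set
r1, r0 = decompose(1234567, alpha)
assert (r1 * alpha + r0) % Q == 1234567  # decomposition is consistent
```

TOPCOAT's hinting mechanism exists because in a two-party setting neither party sees the full value whose HighBits are needed, so the hint lets them agree on r1 without revealing their shares.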
Abstract: Although Ghana does not have one unique language for its citizens, the Twi dialect stands a chance of fulfilling this purpose. Twi is among the low-resourced language categories, yet it is widely spoken beyond Ghana, in countries such as the Ivory Coast, Benin, and Nigeria, and it remains a prime candidate for Twi Machine Translation (MT) under ISO 639-3. The scarcity of a Twi-English parallel corpus is evident at the multiple-domain dataset level, partly due to the complex design structure and the scarcity of the digital Twi lexicon. This study introduces Twi-2-ENG, a large-scale multiple-domain Twi-English parallel corpus, a Twi digital dictionary, and a lexicon version of Twi. It employed the Ghanaian Parliamentary Hansards, crowdsourcing, and digital Ghana news portals to crawl the English sentences. The crawled news portals, the Twi New Testament Bible, and social media platforms together contributed 5,765 parallel sentences. The data-gathering pipeline of translation, compilation, tokenization, and final alignment of the Twi-English parallel sentences, including the technology employed in compiling and hosting the corpus, is duly discussed. The results reveal that the work of qualified linguistic professionals and Twi translation specialists across the media spectrum, academia, and well-wishers adds a considerable volume to the Twi-2-ENG parallel corpus. Finally, all sentences were curated with the help of a corpus manager (Sketch Engine), linguists, and professional translators to align and tokenize all texts, allowing professional Twi linguists to evaluate the corpus. PubDate: 2024-07-05
Abstract: In this paper, we formulate a novel Point-of-Interest (POI) recommendation task, named short-term POI recommendation, which recommends a set of new POIs to visit in a short period following recent check-ins. It differs from previously studied tasks and poses new challenges, such as modeling high-order POI transitions within a short period. We present PTWLR, a personalized time-weighted latent ranking model that jointly learns short-term POI transitions and user preferences, with a proposed temporal weighting scheme to capture the temporal context of transitions. We extend our model to accommodate transition dependencies on multiple recent check-ins. In experiments on real-world datasets, our model consistently outperforms seven widely used methods by significant margins in various contexts, demonstrating its effectiveness on our task. Further analysis shows that all proposed components contribute to the performance improvement. PubDate: 2024-07-03
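One plausible instance of such a temporal weighting scheme (an illustrative assumption, not PTWLR's exact formulation) is to decay each recent check-in's transition scores exponentially with its age, so that the latest check-in dominates while older ones still contribute:

```python
import math

def transition_scores(recent_checkins, now, transitions, decay=0.1):
    """Score candidate POIs from multiple recent check-ins.

    `recent_checkins` is a list of (poi, timestamp); `transitions[poi]` maps
    successor POIs to learned transition scores. Each check-in's contribution
    is down-weighted exponentially with its age. All names here are
    illustrative placeholders.
    """
    scores = {}
    for poi, ts in recent_checkins:
        w = math.exp(-decay * (now - ts))  # older check-in -> smaller weight
        for nxt, s in transitions.get(poi, {}).items():
            scores[nxt] = scores.get(nxt, 0.0) + w * s
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy data: the check-in at the park is one tick older than the cafe one.
transitions = {"cafe": {"museum": 1.0}, "park": {"museum": 0.5, "mall": 2.0}}
ranked = transition_scores([("cafe", 10), ("park", 9)], now=10,
                           transitions=transitions)
```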
Abstract: In this paper we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token into the input of the cross-encoder re-ranker. Prior work showed that interpolation between the relevance scores of lexical and Bidirectional Encoder Representations from Transformers (BERT) based re-rankers may not consistently result in higher effectiveness. Our idea is motivated by the finding that BERT models can capture numeric information. We compare several representations of the Best Match 25 (BM25) and Dense Passage Retrieval (DPR) scores and inject them as text into the input of four different cross-encoders. Since knowledge distillation, i.e., teacher-student training, has proved highly effective for cross-encoder re-rankers, we additionally analyze the effect of injecting the relevance score into the student model while training it with three larger teacher models. Evaluation on the MS MARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods. We show that the improvement is consistent for all query types. We also find an improvement in exact-matching capabilities over both the first-stage rankers and the cross-encoders. Our findings indicate that cross-encoder re-rankers can be efficiently improved, without additional computational burden or extra steps in the pipeline, by adding the output of the first-stage ranker to the model input. This effect is robust across models and query types. PubDate: 2024-06-26
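The core idea of injecting the first-stage score as text can be sketched in a few lines. The template, separator token, and rounding below are illustrative assumptions; the paper compares several such score representations.

```python
def inject_score(query, passage, score, n_digits=2):
    """Prepend the first-stage relevance score as text to the cross-encoder input.

    A minimal sketch of the idea: the cross-encoder then sees the lexical
    score as ordinary tokens alongside the query-passage pair, rather than
    the score being interpolated with the re-ranker's output afterwards.
    """
    return f"{round(score, n_digits)} [SEP] {query} [SEP] {passage}"

pair = inject_score("what is bm25", "BM25 is a ranking function ...", 21.3712)
# e.g. "21.37 [SEP] what is bm25 [SEP] BM25 is a ranking function ..."
```

In practice the string would be tokenized and fed to the cross-encoder exactly like a plain query-passage pair, which is why the method adds no extra pipeline step.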
Abstract: We show how fixed-unitary quantum encryption schemes can be attacked in a black-box setting. We use an efficient technique to invert a unitary transformation on a quantum computer to retrieve an encrypted secret quantum state \(|\psi\rangle\). This attack has a success rate of 100% and can be executed in constant time. We name a vulnerable scheme whose security is fully broken by our attack and suggest how to improve the scheme to invalidate the attack. The proposed attack highlights the importance of carefully designing quantum encryption schemes to ensure their security against quantum adversaries, even in a black-box setting. We point to the faulty assumption and name a criterion for future quantum cipher design to prevent similar vulnerabilities. PubDate: 2024-06-22
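The attack rests on a basic fact: a fixed, known unitary is trivially inverted, since its inverse is its conjugate transpose. A toy numerical sketch (plain Python complex arithmetic, not the quantum-circuit inversion technique itself):

```python
def dagger(U):
    """Inverse of a unitary matrix: its conjugate transpose U†."""
    n = len(U)
    return [[U[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy check with the phase gate S = diag(1, i): applying S† undoes S,
# so an attacker who recovers the fixed U can decrypt every ciphertext state.
S = [[1 + 0j, 0j], [0j, 1j]]
I = matmul(dagger(S), S)  # identity matrix
```

This is why a scheme whose encryption unitary never changes offers no security once the unitary can be characterized, even in a black-box setting.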
Abstract: As an automatic parallelization technique at the thread level on multi-core processors, Thread-Level Speculation (TLS), also called Speculative Multithreading (SpMT), partitions programs into multiple threads and speculatively executes them under ambiguous data and control dependences. The thread partitioning approach plays a key role in the performance of TLS. The existing heuristic-rules-based approach (HR-based approach), a one-size-fits-all strategy, cannot guarantee satisfactory thread partitioning. In this paper, an importance-degree-based thread partitioning approach (IDaTPA) is proposed to partition irregular programs into multiple threads. IDaTPA performs a biased partitioning for every procedure with a machine learning method. It mainly comprises constructing the sample set, knowledge representation, similarity calculation, and a prediction model; the partitioning of irregular programs is performed by the prediction model. Using IDaTPA, the subprocedures in unseen irregular programs can obtain a satisfactory partition. Evaluated on a generic SpMT processor (called Prophet) for multithreaded programs, IDaTPA delivers an average speedup of 1.80 on a 4-core processor. Furthermore, to evaluate its portability, we port IDaTPA to an 8-core processor and obtain an average speedup of 2.82. Experimental results show that IDaTPA obtains a significant speedup increase: the Olden benchmarks deliver a 5.75% performance improvement on 4 cores and a 6.32% improvement on 8 cores, and the SPEC2020 benchmarks obtain a 38.20% performance improvement over the conventional HR-based approach. PubDate: 2024-06-19
Abstract: One of the most significant problems related to Big Data is their analysis with methods from descriptive statistics or machine and deep learning. This process is interesting for both static datasets, containing various data sources that do not change over time, and dynamic datasets, collected with ambient data sources that measure a number of attribute values over long periods. Since access to actual dynamic data systems is demanding, the focus of this work is on the design and implementation of a framework usable for simulating data streams, processing them, and performing subsequent dynamic predictive and visual analysis. The proposed system is experimentally verified in a case study conducted on an environmental variable dataset measured with a real-life sensor network. PubDate: 2024-06-14 DOI: 10.1007/s10791-024-09447-4
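The stream-simulation idea can be sketched with generators: replay a static dataset as a stream and compute a dynamic statistic on the fly. This is a minimal illustration of the concept, not the paper's framework.

```python
from collections import deque

def simulate_stream(readings):
    """Replay a static dataset as a stream (assumption: one reading per tick)."""
    for r in readings:
        yield r

def rolling_mean(stream, window=3):
    """Dynamic analysis step: windowed mean over the incoming stream."""
    buf = deque(maxlen=window)  # deque drops the oldest reading automatically
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Toy sensor readings replayed as a stream, analyzed with a 2-sample window.
means = list(rolling_mean(simulate_stream([2.0, 4.0, 6.0, 8.0]), window=2))
# [2.0, 3.0, 5.0, 7.0]
```

Because both stages are generators, nothing is materialized up front, mirroring how a real sensor feed would be consumed incrementally.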
Abstract: With the rapid development of the Ethereum ecosystem and the increasing applications of decentralized finance (DeFi), security research on smart contracts and blockchain transactions has attracted more and more attention. In particular, front-running attacks on the Ethereum platform have become a major security concern. These attack strategies exploit the transparency and determinism of the blockchain, enabling attackers to gain unfair economic benefits by manipulating the transaction order. This study proposes a sandwich attack detection system integrated into the go-Ethereum client (Geth). By analyzing transaction data streams, the system effectively detects and defends against front-running and sandwich attacks. It achieves real-time analysis of transactions within blocks, quickly and effectively identifying abnormal patterns and potential attack behaviors. The system has been optimized for performance, with an average processing time of 0.442 s per block and an accuracy rate of 83%. The response time for real-time detection of new blocks is within 5 s, with the majority occurring between 1 and 2 s, which is considered acceptable. The findings indicate that, as part of the go-Ethereum client, this detection system helps enhance the security of the Ethereum blockchain, contributing to the protection of DeFi users' private funds and the safety of smart contracts. The primary contribution of this study lies in offering an efficient blockchain transaction monitoring system, capable of accurately detecting sandwich attack transactions within blocks while maintaining normal operating speed as a full node. PubDate: 2024-05-30 DOI: 10.1007/s10791-024-09445-6
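A heavily simplified version of the detection logic might scan each block for the classic attacker-victim-attacker pattern. The field names and heuristics below are illustrative assumptions; the paper's detector inside Geth is considerably more involved.

```python
def find_sandwiches(block_txs):
    """Flag A-V-A patterns: the same sender trades the same pool immediately
    before and after a victim transaction within one block.

    Each tx is a (sender, pool, direction) tuple -- a simplified heuristic
    sketch of sandwich detection, not the paper's full system.
    """
    hits = []
    for i in range(len(block_txs) - 2):
        front, victim, back = block_txs[i], block_txs[i + 1], block_txs[i + 2]
        if (front[0] == back[0] != victim[0]          # same attacker brackets a victim
                and front[1] == victim[1] == back[1]  # same liquidity pool
                and front[2] == "buy" and back[2] == "sell"):
            hits.append((i, i + 1, i + 2))
    return hits

# Toy block: attacker buys, victim buys at a worse price, attacker sells.
txs = [("0xatk", "WETH/USDC", "buy"),
       ("0xvic", "WETH/USDC", "buy"),
       ("0xatk", "WETH/USDC", "sell")]
print(find_sandwiches(txs))  # [(0, 1, 2)]
```

A production detector would also inspect gas prices, amounts, and router calldata, since attackers need not use the same address for the front-run and the back-run.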
Abstract: A primary concern in the realm of mechanical engineering is to ensure efficient and effective data entry for hardware devices. Fasteners are mechanical tools that rigidly connect or affix two surfaces or objects together. They are small, and different fasteners often look similar; it is therefore a long and error-prone procedure to analyze them manually in order to classify and store their related information. With the widespread diffusion of AI frameworks in several domains, equipment manufacturers have started to rely on AI technologies for these heavy tasks. Automatically classifying fasteners by type and extracting metadata from natural language questions are important tasks that fastener manufacturers and suppliers encounter. In this paper, we address these challenges. For the first task, we introduce an augmentation methodology that starts with a small set of 3D models representing each of the 21 types of fasteners we aim to classify and efficiently generates multiple 2D images from these models. Next, we train a vision transformer on the collected data to address a single-label multi-class classification task. For the second task, we introduce a prompt-engineering technique designed for conversational agents. This technique leverages in-context knowledge to extract (metadata field, value) pairs from natural language questions. Subsequently, we tackle a question-answering task over the description fields of the extracted fasteners. Our evaluation demonstrates the effectiveness of both approaches, surpassing the baselines we tested. PubDate: 2024-05-27 DOI: 10.1007/s10791-024-09443-8
Abstract: This paper introduces the Collection on ECIR 2023. The 45th European Conference on Information Retrieval (ECIR 2023) was held in Dublin, Ireland, during April 2–6, 2023. The conference was the largest ECIR ever and brought together hundreds of researchers from Europe and abroad. The authors of a selection of papers shortlisted for the best paper awards were asked to submit expanded versions appearing in this Discover Computing (formerly the Information Retrieval Journal) Collection on ECIR 2023. The first is an analytic paper on incorporating first-stage retrieval status values as input to neural cross-encoder re-rankers. The second contributes new models and new data for the new task of temporal natural language inference. The third presents a weak supervision approach to video retrieval that overcomes the need for large-scale human-labeled training data. Together, these papers showcase the breadth and diversity of current research on information retrieval. PubDate: 2024-05-23 DOI: 10.1007/s10791-024-09442-9