Subjects -> COMPUTER SCIENCE (Total: 2313 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (133 journals)
    - AUTOMATION AND ROBOTICS (116 journals)
    - CLOUD COMPUTING AND NETWORKS (75 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (12 journals)
    - COMPUTER GAMES (23 journals)
    - COMPUTER PROGRAMMING (25 journals)
    - COMPUTER SCIENCE (1305 journals)
    - COMPUTER SECURITY (59 journals)
    - DATA BASE MANAGEMENT (21 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (30 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (109 journals)
    - INTERNET (111 journals)
    - SOCIAL WEB (61 journals)
    - SOFTWARE (43 journals)
    - THEORY OF COMPUTING (10 journals)

SOFTWARE (43 journals)

Showing 1 - 41 of 41 Journals sorted alphabetically
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 5)
Computing and Software for Big Science     Hybrid Journal   (Followers: 1)
IEEE Software     Full-text available via subscription   (Followers: 213)
Image Processing & Communications     Open Access   (Followers: 18)
International Free and Open Source Software Law Review     Open Access   (Followers: 6)
International Journal of Advanced Network, Monitoring and Controls     Open Access  
International Journal of Agile and Extreme Software Development     Hybrid Journal   (Followers: 5)
International Journal of Computer Vision and Image Processing     Full-text available via subscription   (Followers: 18)
International Journal of Forensic Software Engineering     Hybrid Journal  
International Journal of Open Source Software and Processes     Full-text available via subscription   (Followers: 3)
International Journal of People-Oriented Programming     Full-text available via subscription  
International Journal of Secure Software Engineering     Full-text available via subscription   (Followers: 6)
International Journal of Soft Computing and Software Engineering     Open Access   (Followers: 14)
International Journal of Software Engineering Research and Practices     Open Access   (Followers: 13)
International Journal of Software Engineering, Technology and Applications     Hybrid Journal   (Followers: 4)
International Journal of Software Innovation     Full-text available via subscription   (Followers: 1)
International Journal of Software Science and Computational Intelligence     Full-text available via subscription   (Followers: 1)
International Journal of Systems and Software Security and Protection     Hybrid Journal   (Followers: 1)
International Journal of Web Portals     Full-text available via subscription   (Followers: 17)
International Journal of Web Services Research     Full-text available via subscription  
Journal of Communications Software and Systems     Open Access   (Followers: 1)
Journal of Database Management     Full-text available via subscription   (Followers: 8)
Journal of Information Systems Engineering and Business Intelligence     Open Access  
Journal of Information Technology     Hybrid Journal   (Followers: 56)
Journal of Open Research Software     Open Access   (Followers: 4)
Journal of Software Engineering and Applications     Open Access   (Followers: 12)
Journal of Software Engineering Research and Development     Open Access   (Followers: 10)
Press Start     Open Access   (Followers: 1)
Python Papers     Open Access   (Followers: 13)
Python Papers Monograph     Open Access   (Followers: 6)
Python Papers Source Codes     Open Access   (Followers: 11)
Scientific Phone Apps and Mobile Devices     Open Access  
SIGLOG News     Full-text available via subscription  
Software Engineering     Open Access   (Followers: 32)
Software Engineering     Full-text available via subscription   (Followers: 6)
Software Impacts     Open Access   (Followers: 1)
SoftwareX     Open Access   (Followers: 1)
Synthesis Lectures on Algorithms and Software in Engineering     Full-text available via subscription   (Followers: 2)
Synthesis Lectures on Software Engineering     Full-text available via subscription   (Followers: 3)
Transactions on Software Engineering and Methodology     Full-text available via subscription   (Followers: 8)
VFAST Transactions on Software Engineering     Open Access  
Computing and Software for Big Science
Number of Followers: 1  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 2510-2036 - ISSN (Online) 2510-2044
Published by Springer-Verlag  [2468 journals]
  • Jet Energy Calibration with Deep Learning as a Kubeflow Pipeline

      Abstract: Precise measurements of the energy of jets emerging from particle collisions at the LHC are essential for the vast majority of physics searches at the CMS experiment. In this study, we leverage well-established deep learning models for point clouds and CMS open data to improve the energy calibration of particle jets. To enable production-ready machine-learning-based jet energy calibration, an end-to-end pipeline is built on the Kubeflow cloud platform. The pipeline allowed us to scale up our hyperparameter tuning experiments on cloud resources and to serve optimal models as REST endpoints. We present the results of the parameter tuning process and analyze the performance of the served models in terms of inference time and overhead, providing insights for future work in this direction. The study also demonstrates improvements in both flavor dependence and resolution of the energy response when compared to the standard jet energy corrections baseline.
      PubDate: 2023-08-23
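      As a concrete illustration of the pipeline concept, here is a minimal Kubeflow Pipelines (kfp v2) sketch; the component names and placeholder bodies are invented for illustration and are not the authors' actual pipeline:

```python
# Hypothetical two-step calibration pipeline (names and bodies invented).
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def preprocess(dataset_url: str) -> str:
    # A real component would fetch and preprocess CMS open data here.
    return dataset_url + "/preprocessed"


@dsl.component(base_image="python:3.11")
def train(data_uri: str, learning_rate: float) -> str:
    # A real component would train the point-cloud regression model and
    # register it for serving as a REST endpoint.
    return data_uri + f"/model_lr{learning_rate}"


@dsl.pipeline(name="jet-energy-calibration")
def calibration_pipeline(dataset_url: str, learning_rate: float = 1e-3):
    prep = preprocess(dataset_url=dataset_url)
    train(data_uri=prep.output, learning_rate=learning_rate)


if __name__ == "__main__":
    # Compile to a YAML spec that a Kubeflow instance can execute; runs
    # with different hyperparameters can then be launched in parallel.
    compiler.Compiler().compile(calibration_pipeline, "pipeline.yaml")
```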
       
  • Convergent Approaches to AI Explainability for HEP Muonic Particles Pattern Recognition

      Abstract: Neural networks are commonly described as ‘black-box’ models, meaning that the mechanism by which they make predictions and decisions is not immediately clear or even understandable by humans. Explainable Artificial Intelligence (xAI) therefore aims to overcome this limitation by providing explanations for Machine Learning (ML) algorithms and, consequently, making their outcomes reliable for users. However, different xAI methods may provide different explanations, both quantitatively and qualitatively, and this heterogeneity of approaches makes it difficult for a domain expert to select and interpret their results. In this work, we consider this issue in the context of a high-energy physics (HEP) use case concerning muonic motion. In particular, we explored an array of xAI methods based on different approaches and tested their capabilities on our use case. The result is an array of potentially easy-to-understand, human-readable explanations of the models’ predictions; for each method we describe its strengths and drawbacks in this particular scenario, providing an interesting atlas of the convergent application of multiple xAI algorithms in a realistic context.
      PubDate: 2023-08-03
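      The paper's starting point, that different explanation methods can disagree on the same model, is easy to reproduce; a self-contained toy sketch with synthetic data and scikit-learn only (not the authors' HEP setup):

```python
# Two attribution methods applied to one model often rank features
# differently; this motivates comparing an array of xAI techniques.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Method 1: impurity-based importances (fast, but can be biased).
imp_impurity = model.feature_importances_

# Method 2: permutation importance (model-agnostic, behaviour-based).
imp_perm = permutation_importance(
    model, X, y, n_repeats=10, random_state=0
).importances_mean

for i, (a, b) in enumerate(zip(imp_impurity, imp_perm)):
    print(f"feature {i}: impurity={a:.3f}  permutation={b:.3f}")
```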
       
  • Lightweight Integration of a Data Cache for Opportunistic Usage of HPC Resources in HEP Workflows

      Abstract: A data caching setup has been implemented for the High Energy Physics (HEP) computing infrastructure in Freiburg, Germany, as a possible alternative to local long-term storage. Files are automatically cached on disk upon first request by a client, can be accessed from the cache for subsequent requests, and are deleted after predefined conditions are met. The required components are provided to a dedicated HEP cluster and, via virtual research environments, to the opportunistically used High-Performance Computing (HPC) Cluster NEMO (Neuroscience, Elementary Particle Physics, Microsystems Engineering and Materials Science). A typical HEP workflow has been implemented as a benchmark test to identify any overhead introduced by the caching setup with respect to direct, non-cached data access, and to compare the performance of cached and non-cached access to several external file sources. The results indicate no significant overhead in the workflow and faster file access with the caching setup, especially for geographically distant file sources. Additionally, the hardware requirements for various numbers of parallel file requests were measured to estimate future requirements.
      PubDate: 2023-07-05
      DOI: 10.1007/s41781-023-00100-1
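      The cache-on-first-request behaviour described in the abstract can be sketched in a few lines of Python; this illustrates the pattern only, not the Freiburg deployment's actual caching middleware:

```python
# Sketch of "cache on first request, evict after predefined conditions".
import hashlib
import shutil
import time
import urllib.request
from pathlib import Path

CACHE_DIR = Path("/tmp/hep-cache")
MAX_AGE_S = 7 * 24 * 3600  # example deletion condition: unused for a week


def cached_fetch(url: str) -> Path:
    """Return a local path for url, downloading it on first request."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    local = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if not local.exists():
        with urllib.request.urlopen(url) as r, open(local, "wb") as f:
            shutil.copyfileobj(r, f)  # first access populates the cache
    local.touch()  # record the access time for eviction decisions
    return local


def evict_stale() -> None:
    """Delete cached files whose last access is older than MAX_AGE_S."""
    now = time.time()
    for f in CACHE_DIR.glob("*"):
        if now - f.stat().st_mtime > MAX_AGE_S:
            f.unlink()
```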
       
  • Ntuple Wizard: An Application to Access Large-Scale Open Data from LHCb

      Abstract: Making the large data sets collected at the Large Hadron Collider (LHC) accessible to the world is a considerable challenge because of both the complexity and the volume of data. This paper presents the Ntuple Wizard, an application that leverages the existing computing infrastructure available to the LHCb collaboration in order to enable third-party users to request specific data. An intuitive web interface allows the discovery of accessible data sets and guides the user through the process of specifying a configuration-based request. The application allows for fine-grained control of the level of access granted to the public.
      PubDate: 2023-06-14
      DOI: 10.1007/s41781-023-00099-5
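      To give a flavour of what a configuration-based request might contain, a purely hypothetical specification follows; the field names and dataset identifiers are invented, and the Ntuple Wizard's real schema is defined by its web interface:

```python
# Invented example of a declarative ntuple request (illustrative only).
request = {
    "dataset": "LHCb/Collision12/CHARM.MDST",          # hypothetical id
    "decay": "D0 -> K- pi+",                           # decay of interest
    "branches": ["D0_M", "D0_PT", "K_PIDK", "pi_PIDK"],
    "selection": {"D0_PT": {"min_MeV": 2000}},         # fine-grained cuts
    "output": {"format": "ntuple", "max_events": 1_000_000},
}
```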
       
  • Snowmass 2021 Computational Frontier CompF4 Topical Group Report: Storage and Processing Resource Access

      PubDate: 2023-04-26
      DOI: 10.1007/s41781-023-00097-7
       
  • Parametric Optimization on HPC Clusters with Geneva

      Abstract: Many challenges of today’s science are parametric optimization problems that are extremely complex and computationally intensive to calculate. At the same time, the hardware for high-performance computing is becoming increasingly powerful. Geneva is a framework for parallel optimization of large-scale problems with highly nonlinear quality surfaces in grid and cloud environments. To harness the immense computing power of high-performance computing clusters, we have developed a new networking component for Geneva—the so-called MPI Consumer—which makes Geneva suitable for HPC. Geneva is most prominent for its evolutionary algorithm, which requires repeatedly evaluating a user-defined cost function. The MPI Consumer parallelizes the computation of the candidate solutions’ cost functions by sending them to remote cluster nodes. By using an advanced multithreading mechanism on the master node and by using asynchronous requests on the worker nodes, the MPI Consumer is highly scalable. Additionally, it provides fault tolerance, which is usually not the case for MPI programs but becomes increasingly important for HPC. Moreover, the MPI Consumer provides a framework for the intuitive implementation of fine-grained parallelization of the cost function. Since the MPI Consumer conforms to the standard paradigm of HPC programs, it vastly improves Geneva’s user-friendliness on HPC clusters. This article gives insight into Geneva’s general system architecture and the system design of the MPI Consumer as well as the underlying concepts. Geneva—including the novel MPI Consumer—is publicly available as an open source project on GitHub (https://github.com/gemfony/geneva) and is currently used for fundamental physics research at GSI in Darmstadt, Germany.
      PubDate: 2023-04-21
      DOI: 10.1007/s41781-023-00098-6
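      The master/worker pattern behind such an MPI consumer can be sketched with mpi4py; Geneva itself is a C++ framework, so this Python fragment only illustrates the communication pattern, not its implementation:

```python
# Run with e.g.: mpiexec -n 4 python optimize.py
# The master scatters candidate solutions; every rank evaluates one.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()


def cost(x: np.ndarray) -> float:
    # Stand-in for the user-defined cost function of the optimization.
    return float(np.sum(x ** 2))


if rank == 0:
    candidates = [np.random.rand(5) for _ in range(size)]
else:
    candidates = None

x = comm.scatter(candidates, root=0)   # one candidate per rank
costs = comm.gather(cost(x), root=0)   # evaluated in parallel

if rank == 0:
    print("costs per rank:", costs)
```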
       
  • Fast Columnar Physics Analyses of Terabyte-Scale LHC Data on a Cache-Aware Dask Cluster

      Abstract: The development of an LHC physics analysis involves numerous investigations that require the repeated processing of terabytes of data. Thus, a rapid completion of each of these analysis cycles is central to mastering the science project. We present a solution to efficiently handle and accelerate physics analyses on small-size institute clusters. Our solution uses three key concepts: vectorized processing of collision events, the “MapReduce” paradigm for scaling out on computing clusters, and efficiently utilized SSD caching to reduce latencies in IO operations. This work focuses on the latter key concept, its underlying mechanism, and its implementation. Using simulations from a Higgs pair production physics analysis as an example, we achieve an improvement factor of 6.3 in the runtime for reading all input data after one cycle and even an overall speedup of a factor of 14.9 after 10 cycles, reducing the runtime from hours to minutes.
      PubDate: 2023-03-20
      DOI: 10.1007/s41781-023-00095-9
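      The “MapReduce” scaling concept maps naturally onto dask.delayed; the following toy sketch (random numbers standing in for cached columnar reads) shows the per-chunk map step and the final reduce:

```python
# Toy map-reduce over file chunks; in the real analysis, load_chunk would
# read ROOT files through a local SSD cache on each Dask worker.
import numpy as np
from dask import delayed


@delayed
def load_chunk(seed: int) -> np.ndarray:
    # Placeholder for a cached read of one file's event data.
    return np.random.default_rng(seed).exponential(50.0, size=100_000)


@delayed
def partial_hist(pt: np.ndarray) -> np.ndarray:
    # Map step: a vectorized per-chunk histogram.
    counts, _ = np.histogram(pt, bins=50, range=(0, 500))
    return counts


@delayed
def merge(hists: list) -> np.ndarray:
    # Reduce step: combine the partial histograms.
    return np.sum(hists, axis=0)


total = merge([partial_hist(load_chunk(i)) for i in range(100)])
print(total.compute())  # executes the task graph on the cluster
```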
       
  • The ATLAS EventIndex

      Abstract: The ATLAS EventIndex system comprises the catalogue of all events collected, processed or generated by the ATLAS experiment at the CERN LHC accelerator, and all associated software tools to collect, store and query this information. ATLAS records several billion particle interactions every year of operation, processes them for analysis and generates even larger simulated data samples; a global catalogue is needed to keep track of the location of each event record and be able to search and retrieve specific events for in-depth investigations. Each EventIndex record includes summary information on the event itself and the pointers to the files containing the full event. Most components of the EventIndex system are implemented using BigData free and open-source software. This paper describes the architectural choices and their evolution in time, as well as the past, current and foreseen future implementations of all EventIndex components.
      PubDate: 2023-03-11
      DOI: 10.1007/s41781-023-00096-8
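      Conceptually, each record pairs summary data with pointers to the files holding the full event; a hypothetical sketch of such a record follows (field names invented, and the production store is a distributed BigData system rather than a Python dict):

```python
# Invented record shape for an event catalogue (illustrative only).
from dataclasses import dataclass


@dataclass(frozen=True)
class EventIndexRecord:
    run_number: int
    event_number: int
    trigger_summary: tuple   # compact information on the event itself
    file_guids: tuple        # pointers to the files with the full event


catalogue: dict = {}  # production uses a distributed BigData store


def lookup(run: int, event: int) -> EventIndexRecord:
    """Retrieve the pointers needed to fetch one specific event."""
    return catalogue[(run, event)]
```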
       
  • The Tracking Machine Learning Challenge: Throughput Phase

      Abstract: This paper reports on the second “Throughput” phase of the Tracking Machine Learning (TrackML) challenge on the Codalab platform. As in the first “Accuracy” phase, the participants had to solve a difficult experimental problem linked to accurately tracking the trajectories of particles such as those created at the Large Hadron Collider (LHC): given \(O(10^5)\) points, the participants had to connect them into \(O(10^4)\) individual groups that represent the particle trajectories, which are approximately helical. While in the first phase only the accuracy mattered, the goal of this second phase was a compromise between accuracy and inference speed. Both were measured on the Codalab platform, where the participants had to upload their software. The best three participants had solutions with good accuracy and speeds an order of magnitude faster than the state of the art when the challenge was designed. Although the core algorithms were less diverse than in the first phase, a diversity of techniques was used; these are described in this paper. The performance of the algorithms is analysed in depth and lessons are derived.
      PubDate: 2023-02-13
      DOI: 10.1007/s41781-023-00094-w
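      To make the task concrete, a naive unsupervised baseline follows: cluster hits in a simple transformed space with DBSCAN. Variants of this idea served as public baselines for the challenge; the winning solutions are far more sophisticated:

```python
# Group ~1e5 hits into track candidates by clustering in (phi, z/r) space.
import numpy as np
from sklearn.cluster import DBSCAN

hits = np.random.rand(100_000, 3) - 0.5   # placeholder (x, y, z) hits
r = np.linalg.norm(hits[:, :2], axis=1) + 1e-9
phi = np.arctan2(hits[:, 1], hits[:, 0])
features = np.column_stack([phi, hits[:, 2] / r])

labels = DBSCAN(eps=0.01, min_samples=3).fit_predict(features)
n_tracks = len(set(labels)) - (1 if -1 in labels else 0)
print("track candidates:", n_tracks)   # target is O(1e4) groups
```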
       
  • Cait: Analysis Toolkit for Cryogenic Particle Detectors in Python

      Abstract: Cryogenic solid state detectors are widely used in dark matter and neutrino experiments and require careful raw-data analysis. For this purpose, we present Cait, an open-source Python package with all the essential methods for the analysis of detector modules, fully integrable with the Python ecosystem for scientific computing and machine learning. It comes with methods for triggering events from continuously sampled streams, identifying particle recoils and artifacts in a low signal-to-noise-ratio environment, reconstructing deposited energies, and simulating a variety of typical event types. Furthermore, by connecting Cait with existing machine learning frameworks, we introduce novel methods for better automation in data cleaning and background rejection.
      PubDate: 2022-12-17
      DOI: 10.1007/s41781-022-00092-4
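      As an example of the first step mentioned, triggering events from a continuously sampled stream, here is a generic numpy threshold trigger; it illustrates the task only and is not Cait's actual API:

```python
# Find upward threshold crossings in a continuous stream, with a holdoff
# window after each trigger to avoid re-firing on the same pulse.
import numpy as np


def trigger(stream: np.ndarray, threshold: float, holdoff: int) -> list:
    above = stream > threshold
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    events, last = [], -holdoff
    for i in crossings:
        if i - last >= holdoff:
            events.append(int(i))
            last = i
    return events


samples = np.random.normal(0.0, 1.0, 1_000_000)  # noise-only placeholder
print(len(trigger(samples, threshold=5.0, holdoff=1000)))
```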
       
  • Simulation of Dielectric Axion Haloscopes with Deep Neural Networks: A Proof-of-Principle

      Abstract: Dielectric axion haloscopes, such as the Madmax experiment, are promising concepts for the direct search for dark matter axions. A reliable simulation is a fundamental requirement for the successful realisation of the experiments. Due to the complexity of the simulations, the demands on computing resources can quickly become prohibitive. In this paper, we show for the first time that modern deep learning techniques can be applied to aid the simulation and optimisation of dielectric haloscopes.
      PubDate: 2022-11-10
      DOI: 10.1007/s41781-022-00091-5
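      The proof of principle amounts to replacing an expensive simulation with a learned surrogate; a toy version with an invented one-line "simulator" and a scikit-learn MLP (not the paper's network or physics):

```python
# Train a small MLP to emulate an expensive function, then query it
# cheaply during optimisation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5000, 4))   # e.g. disc positions (toy)
y = np.sin(10 * X).sum(axis=1)          # invented stand-in response

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0).fit(X[:4000], y[:4000])
print("held-out R^2:", surrogate.score(X[4000:], y[4000:]))
```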
       
  • When, Where, and How to Open Data: a Personal Perspective

      Abstract: This is a personal perspective on data sharing in the context of public data releases suitable for generic analysis. These open data can be a powerful tool for expanding the science of high-energy physics, but care must be taken in when, where, and how they are utilized. I argue that data preservation even within collaborations needs additional support to maximize our science potential. Additionally, it should also be easier for non-collaboration members to engage with collaborations. Finally, I advocate that we recognize a new type of high-energy physicist: the “data physicist,” who would be optimally suited to analyze open data as well as to develop and deploy new advanced data science tools so that we can use our precious data to their fullest potential.
      PubDate: 2022-11-08
      DOI: 10.1007/s41781-022-00090-6
       
  • Analyzing WLCG File Transfer Errors Through Machine Learning

      Abstract: The ever-growing scale of modern computing infrastructures calls for more ingenious and automated solutions to their management. Our work focuses on file transfer failures within the Worldwide LHC Computing Grid and proposes a pipeline to support distributed data management operations by suggesting potential issues to investigate. Specifically, we adopt an unsupervised learning approach, leveraging Natural Language Processing and Machine Learning tools to automatically parse error messages and group similar failures. The results are presented in the form of a summary table containing the most common textual patterns and time evolution charts. This approach has two main advantages. First, the joint elaboration of the error string and the transfer’s source/destination enables more informative and compact troubleshooting, as opposed to inspecting each site and checking unique messages separately. As a by-product, this also reduces the number of errors to check by some orders of magnitude (from unique error strings to unique categories or patterns). Second, the time evolution plots allow operators to immediately filter out secondary issues (e.g. transient or already being resolved) and focus on the most serious problems first (e.g. escalating failures). As a preliminary assessment, we compare our results with the Global Grid User Support ticketing system, showing that most of our suggestions are indeed real issues (direct association), while covering 89% of reported incidents (inverse relationship).
      PubDate: 2022-10-22
      DOI: 10.1007/s41781-022-00089-z
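      The core grouping step, turning raw error strings into a handful of categories, can be sketched with standard tools (toy messages below; the paper's pipeline also folds in the transfer's source and destination):

```python
# Vectorize error messages and cluster similar failures together.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

errors = [
    "Connection timed out after 3600 seconds",
    "Connection timed out after 7200 seconds",
    "SSL handshake failed: certificate expired",
    "No such file or directory",
    "checksum mismatch after transfer",
]
X = TfidfVectorizer().fit_transform(errors)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for msg, lab in zip(errors, labels):
    print(lab, msg)   # operators inspect one pattern per cluster
```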
       
  • Computational Challenges for Multi-loop Collider Phenomenology

      Abstract: Precision measurements at the LHC and future colliders require theory predictions with uncertainties at the percent level for many observables. Theory uncertainties due to perturbative truncation are particularly relevant and must be reduced to fully exploit the physics potential of collider experiments. In recent years the theoretical high-energy physics community has made tremendous analytical and numerical advances to address this challenge. In this white paper, we survey state-of-the-art calculations in perturbative quantum field theory for collider phenomenology, with a particular focus on the computational requirements at high perturbative orders. We show that these calculations can have specific high-performance-computing (HPC) profiles that should be taken into account in future HPC resource planning.
      PubDate: 2022-09-10
      DOI: 10.1007/s41781-022-00088-0
       
  • Improving Robustness of Jet Tagging Algorithms with Adversarial Training

      Abstract: Deep learning is a standard tool in the field of high-energy physics, facilitating considerable sensitivity enhancements for numerous analysis strategies. In particular, in the identification of physics objects, such as jet flavor tagging, complex neural network architectures play a major role. However, these methods rely on accurate simulations, and mismodeling can lead to non-negligible differences in performance in data that need to be measured and calibrated against. We investigate the classifier response to input data with injected mismodelings and probe the vulnerability of flavor tagging algorithms by applying adversarial attacks. Subsequently, we present an adversarial training strategy that mitigates the impact of such simulated attacks and improves the classifier robustness. We examine the relationship between performance and vulnerability and show that this method constitutes a promising approach to reducing the vulnerability to poor modeling.
      PubDate: 2022-09-10
      DOI: 10.1007/s41781-022-00087-1
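      A minimal FGSM-style adversarial training step in PyTorch captures the general recipe: perturb the inputs along the sign of the input gradient, then train on the perturbed batch. This is a generic sketch, not the paper's tagger architecture or attack configuration:

```python
# One adversarial training step with an FGSM-style input perturbation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 0.01  # strength of the simulated mismodeling / attack

x = torch.randn(128, 16)           # placeholder jet features
y = torch.randint(0, 2, (128,))    # placeholder flavor labels

# Build adversarial inputs: step along the sign of the input gradient.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

# Train on the perturbed inputs to harden the classifier.
opt.zero_grad()
loss = loss_fn(model(x_adv), y)
loss.backward()
opt.step()
```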
       
  • Constraints on Future Analysis Metadata Systems in High Energy Physics

      Abstract: In high energy physics (HEP), analysis metadata comes in many forms—from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.
      PubDate: 2022-07-27
      DOI: 10.1007/s41781-022-00086-2
       
  • Modelling Large-Scale Scientific Data Transfers

      Abstract: This work studies a recently published dataset (Bogado et al., ATLAS Rucio Transfers Dataset, Zenodo, 2020) whose data allow us to reconstruct the lifetime of file transfers in the context of the Worldwide LHC Computing Grid (WLCG). Several models for Rule Time To Complete (TTC) prediction are presented and evaluated. The dataset source is Rucio, an open-source software framework that provides scientific collaborations with the functionality to organize, manage, and access their data at scale. The wealth of data gathered about the transfers and rules presents a unique opportunity to better understand the complex mechanisms involved in file transfers across the WLCG.
      PubDate: 2022-07-06
      DOI: 10.1007/s41781-022-00084-4
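      At its simplest, TTC prediction is a supervised regression problem; a sketch with invented features and synthetic targets standing in for the published dataset:

```python
# Toy TTC regressor: the feature names and the synthetic target are
# invented; the paper evaluates several model families on real data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.lognormal(20, 2, n),     # file size in bytes (toy values)
    rng.integers(0, 50, n),      # source-destination link id
    rng.integers(0, 500, n),     # files already queued on the link
])
ttc = X[:, 0] / 1e7 + 5 * X[:, 2] + rng.normal(0, 10, n)  # toy target (s)

X_tr, X_te, y_tr, y_te = train_test_split(X, ttc, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```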
       
  • Future Trends in Nuclear Physics Computing

      PubDate: 2022-06-22
      DOI: 10.1007/s41781-022-00085-3
       
  • Advances in Computing in High Energy and Nuclear Physics—Invited Papers from vCHEP 2021

      PubDate: 2022-05-10
      DOI: 10.1007/s41781-022-00083-5
       
  • Shared Data and Algorithms for Deep Learning in Fundamental Physics

      Abstract: We introduce a Python package that provides simple and unified access to a collection of datasets from fundamental physics research—including particle physics, astroparticle physics, and hadron and nuclear physics—for supervised machine learning studies. The datasets contain hadronic top quarks, cosmic-ray-induced air showers, phase transitions in hadronic matter, and generator-level histories. While public datasets from multiple fundamental physics disciplines already exist, the common interface and provided reference models simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. We discuss the design and structure and outline how additional datasets can be submitted for inclusion. As a showcase application, we present a simple yet flexible graph-based neural network architecture that can easily be applied to a wide range of supervised learning tasks. We show that our approach reaches performance close to dedicated methods on all datasets. To simplify adaptation to various problems, we provide easy-to-follow instructions on how graph-based representations of data structures relevant to fundamental physics can be constructed, and we provide code implementations for several of them. Implementations are also provided for our proposed method and all reference algorithms.
      PubDate: 2022-05-03
      DOI: 10.1007/s41781-022-00082-6
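      A graph-based representation of point-like data can be built in a few lines, for example by connecting each point to its k nearest neighbours; a sketch with placeholder features (this is the general technique, not the package's own API):

```python
# Build a k-nearest-neighbour graph and extract the edge list in the
# (source, target) format most GNN libraries expect.
import numpy as np
from sklearn.neighbors import kneighbors_graph

points = np.random.rand(200, 4)   # placeholder per-particle features
adj = kneighbors_graph(points, n_neighbors=8, mode="connectivity")

src, dst = adj.nonzero()
edge_index = np.stack([src, dst])
print(edge_index.shape)           # (2, 200 * 8)
```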
       
 