Subjects -> COMPUTER SCIENCE (Total: 2313 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (133 journals)
    - AUTOMATION AND ROBOTICS (116 journals)
    - CLOUD COMPUTING AND NETWORKS (75 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (12 journals)
    - COMPUTER GAMES (23 journals)
    - COMPUTER PROGRAMMING (25 journals)
    - COMPUTER SCIENCE (1305 journals)
    - COMPUTER SECURITY (59 journals)
    - DATA BASE MANAGEMENT (21 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (30 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (109 journals)
    - INTERNET (111 journals)
    - SOCIAL WEB (61 journals)
    - SOFTWARE (43 journals)
    - THEORY OF COMPUTING (10 journals)

SOFTWARE (43 journals)

Showing 1 - 34 of 34 Journals sorted alphabetically
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 6)
Computing and Software for Big Science     Hybrid Journal   (Followers: 1)
IEEE Software     Full-text available via subscription   (Followers: 243)
International Free and Open Source Software Law Review     Open Access   (Followers: 6)
International Journal of Advanced Network, Monitoring and Controls     Open Access  
International Journal of Agile and Extreme Software Development     Hybrid Journal   (Followers: 5)
International Journal of Computer Vision and Image Processing     Full-text available via subscription   (Followers: 15)
International Journal of Forensic Software Engineering     Hybrid Journal  
International Journal of Open Source Software and Processes     Full-text available via subscription   (Followers: 3)
International Journal of People-Oriented Programming     Full-text available via subscription  
International Journal of Secure Software Engineering     Full-text available via subscription   (Followers: 6)
International Journal of Soft Computing and Software Engineering     Open Access   (Followers: 14)
International Journal of Software Engineering, Technology and Applications     Hybrid Journal   (Followers: 4)
International Journal of Software Innovation     Full-text available via subscription   (Followers: 1)
International Journal of Software Science and Computational Intelligence     Full-text available via subscription   (Followers: 1)
International Journal of Systems and Software Security and Protection     Hybrid Journal   (Followers: 2)
International Journal of Web Portals     Full-text available via subscription   (Followers: 17)
International Journal of Web Services Research     Full-text available via subscription  
Journal of Communications Software and Systems     Open Access   (Followers: 1)
Journal of Database Management     Full-text available via subscription   (Followers: 8)
Journal of Information Systems Engineering and Business Intelligence     Open Access  
Journal of Information Technology     Hybrid Journal   (Followers: 56)
Journal of Software Engineering and Applications     Open Access   (Followers: 12)
Journal of Software Engineering Research and Development     Open Access   (Followers: 10)
Python Papers     Open Access   (Followers: 11)
Python Papers Monograph     Open Access   (Followers: 4)
Scientific Phone Apps and Mobile Devices     Open Access  
SIGLOG news     Full-text available via subscription   (Followers: 1)
Software Engineering     Open Access   (Followers: 33)
Software Engineering     Full-text available via subscription   (Followers: 6)
Software Impacts     Open Access   (Followers: 1)
SoftwareX     Open Access   (Followers: 1)
Transactions on Software Engineering and Methodology     Full-text available via subscription   (Followers: 7)
VFAST Transactions on Software Engineering     Open Access   (Followers: 2)
Computing and Software for Big Science
Number of Followers: 1  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 2510-2036 - ISSN (Online) 2510-2044
Published by Springer-Verlag  [2468 journals]
  • Optimizing High-Throughput Inference on Graph Neural Networks at Shared
           Computing Facilities with the NVIDIA Triton Inference Server

      Abstract: With machine learning applications now spanning a variety of computational tasks, multi-user shared computing facilities are devoting a rapidly increasing proportion of their resources to such algorithms. Graph neural networks (GNNs), for example, have provided astounding improvements in extracting complex signatures from data and are now widely used in a variety of applications, such as particle jet classification in high energy physics (HEP). However, GNNs also come with an enormous computational penalty that requires the use of GPUs to maintain reasonable throughput. At shared computing facilities, such as those used by physicists at Fermi National Accelerator Laboratory (Fermilab), methodical resource allocation and high throughput at the many-user scale are key to ensuring that resources are being used as efficiently as possible. These facilities, however, primarily provide CPU-only nodes, which proves detrimental to time-to-insight and computational throughput for workflows that include machine learning inference. In this work, we describe how a shared computing facility can use the NVIDIA Triton Inference Server to optimize its resource allocation and computing structure, recovering high throughput while scaling out to multiple users by massively parallelizing their machine learning inference. To demonstrate the effectiveness of this system in a realistic multi-user environment, we use the Fermilab Elastic Analysis Facility augmented with the Triton Inference Server to provide scalable and high-throughput access to a HEP-specific GNN and report on the outcome.
      PubDate: 2024-07-18
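      Illustration: the setup described above keeps the GPUs behind a Triton Inference Server, so client jobs only ship tensors over the network. The Python sketch below shows one such client-side request with the tritonclient library; the server address, model name, tensor names, and shapes are hypothetical placeholders (a running Triton server is assumed), not the Fermilab configuration.

          import numpy as np
          import tritonclient.http as httpclient  # HTTP client shipped with the tritonclient package

          # Hypothetical server address and model name -- not the Fermilab deployment.
          client = httpclient.InferenceServerClient(url="localhost:8000")

          # Fake node-feature tensor standing in for one graph (128 nodes, 16 features each).
          node_features = np.random.rand(128, 16).astype(np.float32)

          inputs = [httpclient.InferInput("NODE_FEATURES", list(node_features.shape), "FP32")]
          inputs[0].set_data_from_numpy(node_features)
          outputs = [httpclient.InferRequestedOutput("SCORES")]

          # The heavy inference runs remotely on the server's GPUs; this client machine needs none.
          result = client.infer(model_name="gnn_tagger", inputs=inputs, outputs=outputs)
          scores = result.as_numpy("SCORES")
          print(scores.shape)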
       
  • A Case Study of Sending Graph Neural Networks Back to the Test Bench for
           Applications in High-Energy Particle Physics

      Abstract: In high-energy particle collisions, the primary collision products usually decay further, resulting in tree-like, hierarchical structures with a priori unknown multiplicity. At the stable-particle level, all decay products of a collision form permutation-invariant sets of final-state objects. The analogy to mathematical graphs gives rise to the idea that graph neural networks (GNNs), which naturally resemble these properties, should be best suited to address many tasks related to high-energy particle physics. In this paper we describe a benchmark test of a typical GNN against neural networks of the well-established deep fully connected feed-forward architecture. We aim to make this comparison maximally unbiased in terms of the number of nodes, hidden layers, and trainable parameters of the neural networks under study. As the physics case we use the classification of the final state \(\text{X}\) produced in association with top quark–antiquark pairs in proton–proton collisions at the Large Hadron Collider at CERN, where \(\text{X}\) stands for a bottom quark–antiquark pair produced either non-resonantly or through the decay of an intermediately produced \(\text{Z}\) or Higgs boson.
      PubDate: 2024-07-12
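      Illustration: a comparison that is unbiased in trainable parameters requires counting them consistently for every architecture under study. The short PyTorch sketch below shows a generic way to do that for a fully connected baseline; the layer sizes are invented for illustration and are unrelated to the models used in the paper.

          import torch.nn as nn

          def n_trainable(model: nn.Module) -> int:
              """Count trainable parameters, the quantity matched between the compared models."""
              return sum(p.numel() for p in model.parameters() if p.requires_grad)

          # Invented fully connected baseline: 20 input features, two hidden layers, 2 classes.
          fcn = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                              nn.Linear(64, 64), nn.ReLU(),
                              nn.Linear(64, 2))

          # The GNN under study would be counted with the same helper, and the hidden
          # widths adjusted until both budgets roughly agree.
          print(n_trainable(fcn))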
       
  • Soft Margin Spectral Normalization for GANs

      Abstract: In this paper, we explore the use of Generative Adversarial Networks (GANs) to speed up the simulation process while ensuring that the generated results are consistent in terms of physics metrics. Our main focus is the application of spectral normalization for GANs to generate response data for the electromagnetic calorimeter (ECAL), a crucial component of the LHCb detector. We propose an approach that allows balancing the model’s capacity against its stability during training, compare it with previously published approaches, and study the relationship between the proposed method’s hyperparameters and the quality of the generated objects. We show that tuning the normalization method’s hyperparameters boosts the quality of the generative model.
      PubDate: 2024-07-02
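      Illustration: spectral normalization, the technique the paper builds on, constrains the discriminator by rescaling each weight matrix by an estimate of its largest singular value. The PyTorch sketch below applies the stock (hard) spectral normalization to a toy discriminator; the input size is invented, and the soft-margin variant proposed in the paper is not part of the standard library and is not reproduced here.

          import torch
          import torch.nn as nn
          from torch.nn.utils import spectral_norm

          # Toy discriminator for flattened calorimeter images (30x30 cells, invented size).
          disc = nn.Sequential(
              spectral_norm(nn.Linear(900, 256)),  # weight divided by its top singular value
              nn.LeakyReLU(0.2),
              spectral_norm(nn.Linear(256, 1)),
          )

          fake_batch = torch.randn(8, 900)
          print(disc(fake_batch).shape)  # torch.Size([8, 1])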
       
  • Autoencoder-Based Anomaly Detection System for Online Data Quality
           Monitoring of the CMS Electromagnetic Calorimeter

      Abstract: The CMS detector is a general-purpose apparatus that detects high-energy collisions produced at the LHC. Online data quality monitoring of the CMS electromagnetic calorimeter is a vital operational tool that allows detector experts to quickly identify, localize, and diagnose a broad range of detector issues that could affect the quality of physics data. A real-time autoencoder-based anomaly detection system using semi-supervised machine learning is presented, enabling the detection of anomalies in the CMS electromagnetic calorimeter data. A novel method is introduced which maximizes the anomaly detection performance by exploiting the time-dependent evolution of anomalies as well as spatial variations in the detector response. The autoencoder-based system is able to efficiently detect anomalies while maintaining a very low false discovery rate. The performance of the system is validated with anomalies found in 2018 and 2022 LHC collision data. In addition, the first results from deploying the autoencoder-based system in the CMS online data quality monitoring workflow during the beginning of Run 3 of the LHC are presented, showing its ability to detect issues missed by the existing system.
      PubDate: 2024-06-24
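      Illustration: the core of an autoencoder-based anomaly detection system of this kind is a reconstruction-error score: the network is trained on good data only, and inputs it reconstructs poorly are flagged. The PyTorch sketch below shows that scoring step with invented tensor sizes; it is a generic illustration, not the CMS implementation.

          import torch
          import torch.nn as nn

          # Toy autoencoder for a flattened detector occupancy map (invented size: 360 channels).
          autoencoder = nn.Sequential(
              nn.Linear(360, 32), nn.ReLU(),   # encoder
              nn.Linear(32, 360),              # decoder
          )

          def anomaly_score(batch: torch.Tensor) -> torch.Tensor:
              """Per-event mean squared reconstruction error; large values are anomalous."""
              with torch.no_grad():
                  reconstruction = autoencoder(batch)
              return ((batch - reconstruction) ** 2).mean(dim=1)

          events = torch.rand(100, 360)
          threshold = 0.05  # in practice tuned on known-good data to fix the false-discovery rate
          flagged = anomaly_score(events) > threshold
          print(int(flagged.sum()), "events flagged")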
       
  • Optimal Operation of Cryogenic Calorimeters Through Deep Reinforcement
           Learning

      Abstract: Cryogenic phonon detectors with transition-edge sensors achieve the best sensitivity to sub-GeV/\(c^2\) dark matter interactions with nuclei in current direct detection experiments. In such devices, the temperature of the thermometer and the bias current in its readout circuit need careful optimization to achieve optimal detector performance. This task is not trivial and is typically done manually by an expert. In our work, we automated the procedure with reinforcement learning in two settings. First, we trained on a simulation of the response of three Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) detectors used as a virtual reinforcement learning environment. Second, we trained live on the same detectors operated in the CRESST underground setup. In both cases, we were able to optimize a standard detector as quickly as human experts and with comparable results. Our method enables the tuning of large-scale cryogenic detector setups with minimal manual intervention.
      PubDate: 2024-05-22
      DOI: 10.1007/s41781-024-00119-y
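      Illustration: framed as reinforcement learning, the optimization above amounts to an agent that repeatedly picks an operating point (thermometer temperature, bias current) and receives a reward reflecting detector performance. The Python sketch below shows that interaction loop against a toy environment; the state, action, and reward are invented placeholders, not the CRESST simulation, and a random policy stands in for the trained agent.

          import random

          class ToyDetectorEnv:
              """Toy stand-in for a detector-control environment (not the CRESST simulation).

              State: current bias-current setting. Action: small up/down adjustment.
              Reward: higher when the setting is close to an optimum unknown to the agent.
              """
              def __init__(self):
                  self.optimum = 0.7
                  self.setting = 0.0

              def reset(self):
                  self.setting = random.random()
                  return self.setting

              def step(self, action):   # action in {-1, 0, +1}
                  self.setting = min(1.0, max(0.0, self.setting + 0.05 * action))
                  reward = -abs(self.setting - self.optimum)
                  return self.setting, reward

          env = ToyDetectorEnv()
          state = env.reset()
          for _ in range(50):           # a trained policy would replace this random one
              action = random.choice([-1, 0, 1])
              state, reward = env.step(action)
          print("final setting", round(state, 2))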
       
  • Software Performance of the ATLAS Track Reconstruction for LHC Run 3

      Abstract: Charged particle reconstruction in the presence of many simultaneous proton–proton (\(pp\)) collisions in the LHC is a challenging task for the ATLAS experiment’s reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to reconstruct high-activity collisions with an average of 50 or more simultaneous \(pp\) interactions per bunch crossing (pile-up) promptly using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 \(pp\) collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two.
      PubDate: 2024-04-02
      DOI: 10.1007/s41781-023-00111-y
       
  • Real-Time Graph Building on FPGAs for Machine Learning Trigger
           Applications in Particle Physics

      Abstract: We present a design methodology that enables the semi-automatic generation of hardware-accelerated graph-building architectures for locally constrained graphs based on formally described detector definitions. In addition, we define a similarity measure in order to compare our locally constrained graph-building approaches with commonly used k-nearest-neighbour building approaches. To demonstrate the feasibility of our solution for particle physics applications, we implemented a real-time graph-building approach in a case study for the Belle II central drift chamber using Field-Programmable Gate Arrays (FPGAs). Our presented solution adheres to all throughput and latency constraints currently present in the hardware-based trigger of the Belle II experiment. We achieve constant time complexity at the expense of linear space complexity and thus prove that our automated methodology generates online graph-building designs suitable for a wide range of particle physics applications. By providing hardware-accelerated preprocessing of graphs, we enable the deployment of novel Graph Neural Networks (GNNs) in first-level triggers of particle physics experiments.
      PubDate: 2024-03-21
      DOI: 10.1007/s41781-024-00117-0
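      Illustration: the paper compares its locally constrained graph building against the k-nearest-neighbour construction commonly used in GNN pipelines. For reference, a brute-force NumPy sketch of kNN edge building over detector hits is given below; the hit coordinates and the value of k are invented, and the FPGA design in the paper naturally does not work this way.

          import numpy as np

          rng = np.random.default_rng(0)
          hits = rng.random((200, 2))   # invented 2D hit positions, e.g. (layer, phi)
          k = 4

          # Pairwise distances; for each hit keep the k closest other hits as edges.
          d = np.linalg.norm(hits[:, None, :] - hits[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)           # a hit is not its own neighbour
          neighbours = np.argsort(d, axis=1)[:, :k]

          edges = [(i, j) for i in range(len(hits)) for j in neighbours[i]]
          print(len(edges), "directed edges")   # 200 * k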
       
  • Deep Generative Models for Fast Photon Shower Simulation in ATLAS

      Abstract: The need for large-scale production of highly accurate simulated event samples for the extensive physics programme of the ATLAS experiment at the Large Hadron Collider motivates the development of new simulation techniques. Building on the recent success of deep learning algorithms, variational autoencoders and generative adversarial networks are investigated for modelling the response of the central region of the ATLAS electromagnetic calorimeter to photons of various energies. The properties of synthesised showers are compared with showers from a full detector simulation using Geant4. Both variational autoencoders and generative adversarial networks are capable of quickly simulating electromagnetic showers with correct total energies and stochasticity, though the modelling of some shower shape distributions requires more refinement. This feasibility study demonstrates the potential of using such algorithms for ATLAS fast calorimeter simulation in the future and shows a possible way to complement current simulation techniques.
      PubDate: 2024-03-05
      DOI: 10.1007/s41781-023-00106-9
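      Illustration: of the two generative approaches studied, the variational autoencoder draws its latent code with the reparameterization trick, the step that makes sampling differentiable. A minimal PyTorch sketch of that step is shown below; the latent dimension is arbitrary, and the ATLAS encoder/decoder architectures are not reproduced.

          import torch

          def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
              """Draw z ~ N(mu, sigma^2) so that gradients flow through mu and log_var."""
              std = torch.exp(0.5 * log_var)
              eps = torch.randn_like(std)
              return mu + eps * std

          mu = torch.zeros(8, 16)        # invented batch of 8 showers, 16 latent dimensions
          log_var = torch.zeros(8, 16)
          z = reparameterize(mu, log_var)
          print(z.shape)                 # torch.Size([8, 16]); z would be fed to the decoder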
       
  • FunTuple: A New N-tuple Component for Offline Data Processing at the LHCb
           Experiment

      Abstract: The offline software framework of the LHCb experiment has undergone a significant overhaul to tackle the data processing challenges that will arise in the upcoming Run 3 and Run 4 of the Large Hadron Collider. This paper introduces FunTuple, a novel component developed for offline data processing within the LHCb experiment. This component enables the computation and storage of a diverse range of observables for both reconstructed and simulated events by leveraging the tools initially developed for the trigger system. This feature is crucial for ensuring consistency between trigger-computed and offline-analysed observables. The component and its tool suite offer users flexibility to customise stored observables, and its reliability is validated through a full-coverage set of rigorous unit tests. This paper comprehensively explores FunTuple’s design, interface, interaction with other algorithms, and its role in facilitating offline data processing for the LHCb experiment for the next decade and beyond.
      PubDate: 2024-02-24
      DOI: 10.1007/s41781-024-00116-1
       
  • Artificial Intelligence for the Electron Ion Collider (AI4EIC)

      Abstract: The Electron-Ion Collider (EIC), a state-of-the-art facility for studying the strong force, is expected to begin commissioning its first experiments in 2028. This is an opportune time for artificial intelligence (AI) to be included from the start at this facility and in all phases that lead up to the experiments. The second annual workshop organized by the AI4EIC working group, which recently took place, centered on exploring all current and prospective application areas of AI for the EIC. This workshop is not only beneficial for the EIC, but also provides valuable insights for the newly established ePIC collaboration at EIC. This paper summarizes the different activities and R&D projects covered across the sessions of the workshop and provides an overview of the goals, approaches and strategies regarding AI/ML in the EIC community, as well as cutting-edge techniques currently studied in other experiments.
      PubDate: 2024-02-15
      DOI: 10.1007/s41781-024-00113-4
       
  • PanDA: Production and Distributed Analysis System

      Abstract: The Production and Distributed Analysis (PanDA) system is a data-driven workload management system engineered to operate at the LHC data processing scale. The PanDA system provides a solution for scientific experiments to fully leverage their distributed heterogeneous resources, showcasing scalability, usability, flexibility, and robustness. The system has successfully proven itself through nearly two decades of steady operation in the ATLAS experiment, addressing intricate requirements such as diverse resources distributed worldwide at about 200 sites, thousands of scientists analyzing the data remotely, a volume of processed data beyond the exabyte scale, dozens of scientific applications to support, and data processing over several billion hours of computing usage per year. PanDA’s flexibility and scalability make it suitable for the High Energy Physics community and wider science domains at the Exascale. Beyond High Energy Physics, PanDA’s relevance extends to other big data sciences, as evidenced by its adoption in the Vera C. Rubin Observatory and the sPHENIX experiment. As the significance of advanced workflows continues to grow, PanDA has transformed into a comprehensive ecosystem, effectively tackling challenges associated with emerging workflows and evolving computing technologies. The paper discusses PanDA’s prominent role in the scientific landscape, detailing its architecture, functionality, deployment strategies, project management approaches, results, and evolution into an ecosystem.
      PubDate: 2024-01-23
      DOI: 10.1007/s41781-024-00114-3
       
  • KinFit: A Kinematic Fitting Package for Hadron Physics Experiments

      Abstract: A kinematic fitting package, KinFit, based on the Lagrange multiplier technique has been implemented for generic hadron physics experiments. It is particularly suitable for experiments where the interaction point is unknown, such as experiments with extended target volumes. The KinFit package includes vertex finding tools and fitting with kinematic constraints, such as a mass hypothesis and four-momentum conservation, as well as combinations of these constraints. The new package is distributed as open-source software via GitHub. This paper presents a comprehensive description of the KinFit package and its features, as well as a benchmark study using Monte Carlo simulations of the \(pp \rightarrow pK^+\Lambda \rightarrow pK^+p\pi^-\) reaction. The results show that KinFit improves the parameter resolution and provides an excellent basis for event selection.
      PubDate: 2024-01-07
      DOI: 10.1007/s41781-023-00112-x
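      For orientation, the Lagrange multiplier technique underlying such a fit minimizes a \(\chi^2\) built from the measured parameters subject to the kinematic constraints. In generic notation (not copied from the paper), the fit finds a stationary point of

          \[ \chi^2 = (\vec{y} - \vec{\eta})^{\mathsf T} V^{-1} (\vec{y} - \vec{\eta}) + 2\,\vec{\lambda}^{\mathsf T} \vec{f}(\vec{\eta}), \]

      where \(\vec{y}\) are the measured track parameters with covariance \(V\), \(\vec{\eta}\) are the fitted parameters, \(\vec{f}(\vec{\eta}) = 0\) are the constraint equations (for example a mass hypothesis or four-momentum conservation), and \(\vec{\lambda}\) are the Lagrange multipliers; the solution is obtained iteratively.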
       
  • Fast Simulation for the Super Charm-Tau Factory Detector

      Abstract: The Super Charm-Tau factory (a high-luminosity electron-positron collider with a center-of-mass energy range of 3–7 GeV) experiment project is under development by a consortium of Russian scientific and educational organizations. The article describes the present status of the Super Charm-Tau detector fast simulation and the algorithms on which it is based, along with example usage and a demonstration of fast simulation results.
      PubDate: 2024-01-02
      DOI: 10.1007/s41781-023-00108-7
       
  • A Flexible and Efficient Approach to Missing Transverse Momentum
           Reconstruction

      Abstract: Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of “invisible” objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.
      PubDate: 2024-01-02
      DOI: 10.1007/s41781-023-00110-z
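      For orientation, in generic notation (not specific to this paper) the observable is the negative vector sum of the transverse momenta of the selected hard objects plus a soft term,

          \[ \vec{p}_{\mathrm T}^{\,\mathrm{miss}} = -\Big( \sum_{\text{objects}} \vec{p}_{\mathrm T} + \sum_{\text{soft}} \vec{p}_{\mathrm T} \Big), \qquad E_{\mathrm T}^{\mathrm{miss}} = \big|\vec{p}_{\mathrm T}^{\,\mathrm{miss}}\big|, \]

      so the ambiguity discussed above is precisely which objects and which soft contributions enter each sum under a given selection.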
       
  • Quantum Algorithms for Charged Particle Track Reconstruction in the LUXE
           Experiment

      Abstract: The LUXE experiment is a new experiment being planned in Hamburg, which will study quantum electrodynamics at the strong-field frontier. LUXE intends to measure the positron production rate in this unprecedented regime using, among other detectors, a silicon tracking detector. The large number of expected positrons traversing the sensitive detector layers results in an extremely challenging combinatorial problem, which can become computationally expensive for classical computers. This paper investigates the potential future use of gate-based quantum computers for pattern recognition in track reconstruction. Approaches based on a quadratic unconstrained binary optimisation and a quantum graph neural network are investigated in classical simulations of quantum devices and compared with a classical track reconstruction algorithm. In addition, a proof-of-principle study is performed using quantum hardware.
      PubDate: 2023-12-18
      DOI: 10.1007/s41781-023-00109-6
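      For orientation, a quadratic unconstrained binary optimisation (QUBO), one of the two approaches investigated, encodes each track candidate (for example a triplet of hits) as a binary variable and writes the reconstruction as the minimization of a quadratic form. In generic notation (not the LUXE-specific objective),

          \[ \min_{x \in \{0,1\}^{n}} \; \sum_{i} a_{i}\, x_{i} + \sum_{i < j} b_{ij}\, x_{i} x_{j}, \]

      where the linear coefficients \(a_i\) score individual candidates and the quadratic coefficients \(b_{ij}\) reward compatible pairs and penalize conflicting ones; this form maps directly onto quantum annealers and variational gate-based algorithms.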
       
  • Photon Reconstruction in the Belle II Calorimeter Using Graph Neural
           Networks

      Abstract: We present the study of a fuzzy clustering algorithm for the Belle II electromagnetic calorimeter using Graph Neural Networks. We use a realistic detector simulation including simulated beam backgrounds and focus on the reconstruction of both isolated and overlapping photons. We find significant improvements in energy resolution compared to the currently used reconstruction algorithm for both isolated and overlapping photons, of more than 30% for photons with energies \(E_{\gamma} < 0.5\,\text{GeV}\) and high levels of beam backgrounds. Overall, the GNN reconstruction improves the resolution and reduces the tails of the reconstructed energy distribution, and is therefore a promising option for the upcoming high-luminosity running of Belle II.
      PubDate: 2023-12-15
      DOI: 10.1007/s41781-023-00105-w
       
  • GNN for Deep Full Event Interpretation and Hierarchical Reconstruction of
           Heavy-Hadron Decays in Proton–Proton Collisions

      Abstract: The LHCb experiment at the Large Hadron Collider (LHC) is designed to perform high-precision measurements of heavy-hadron decays, which requires the collection of large data samples and a good understanding and suppression of multiple background sources. Both factors are challenged by a fivefold increase in the average number of proton–proton collisions per bunch crossing, corresponding to a change in the detector operation conditions for the recently started LHCb Upgrade I phase. A further tenfold increase is expected in the Upgrade II phase, planned for the next decade. The limits in the storage capacity of the trigger will bring an inverse relationship between the number of particles selected to be stored per event and the number of events that can be recorded. In addition, the background levels will rise due to the enlarged combinatorics. To tackle both challenges, we propose a novel approach, never attempted before in a hadronic collider: a Deep-learning based Full Event Interpretation (DFEI), to perform the simultaneous identification, isolation and hierarchical reconstruction of all the heavy-hadron decay chains per event. This strategy radically contrasts with the standard selection procedure used in LHCb to identify heavy-hadron decays, which looks individually at subsets of particles compatible with being products of specific decay types, disregarding the contextual information from the rest of the event. Following the DFEI approach, once the relevant particles in each event are identified, the rest can be safely removed to optimise the storage space and maximise the trigger efficiency. We present the first prototype of the DFEI algorithm, which leverages the power of Graph Neural Networks (GNN). This paper describes the design and development of the algorithm, and its performance in Upgrade I simulated conditions.
      PubDate: 2023-11-17
      DOI: 10.1007/s41781-023-00107-8
       
  • Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data
           Processing

      Abstract: We study the performance of a cloud-based GPU-accelerated inference server to speed up event reconstruction in neutrino data batch jobs. Using detector data from the ProtoDUNE experiment and employing the standard DUNE grid job submission tools, we attempt to reprocess the data by running several thousand concurrent grid jobs, a rate we expect to be typical of current and future neutrino physics experiments. We process most of the dataset with the GPU version of our processing algorithm and the remainder with the CPU version for timing comparisons. We find that a 100-GPU cloud-based server is able to easily meet the processing demand, and that using the GPU version of the event processing algorithm is two times faster than processing these data with the CPU version when compared to the newest CPUs in our sample. The amount of data transferred to the inference server during the GPU runs can overwhelm even the highest-bandwidth network switches, however, unless care is taken to observe network facility limits or otherwise distribute the jobs to multiple sites. We discuss the lessons learned from this processing campaign and several avenues for future improvements.
      PubDate: 2023-10-27
      DOI: 10.1007/s41781-023-00101-0
       
  • Potential of the Julia Programming Language for High Energy Physics
           Computing

      Abstract: Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on the code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used, while for physicists, who focus on the application when developing the code, better research productivity argues for a high-level programming language. A popular approach consists of combining Python, used for the high-level interface, and C++, used for the computing-intensive part of the code. A more convenient and efficient approach would be to use a language that provides both high-level programming and high performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper the applicability of using the Julia language for HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interface with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large-scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified.
      PubDate: 2023-10-05
      DOI: 10.1007/s41781-023-00104-x
       
  • Jet Energy Calibration with Deep Learning as a Kubeflow Pipeline

      Abstract: Precise measurements of the energy of jets emerging from particle collisions at the LHC are essential for a vast majority of physics searches at the CMS experiment. In this study, we leverage well-established deep learning models for point clouds and CMS open data to improve the energy calibration of particle jets. To enable production-ready machine-learning-based jet energy calibration, an end-to-end pipeline is built on the Kubeflow cloud platform. The pipeline allowed us to scale up our hyperparameter tuning experiments on cloud resources, and to serve optimal models as REST endpoints. We present the results of the parameter tuning process and analyze the performance of the served models in terms of inference time and overhead, providing insights for future work in this direction. The study also demonstrates improvements in both flavor dependence and resolution of the energy response when compared to the standard jet energy corrections baseline.
      PubDate: 2023-08-23
      DOI: 10.1007/s41781-023-00103-y
       
 