The Visual Computer
Journal Prestige (SJR): 0.401
Citation Impact (citeScore): 2
Number of Followers: 3  
 
  Hybrid journal (It can contain Open Access articles)
ISSN (Print) 0178-2789 - ISSN (Online) 1432-2315
Published by Springer-Verlag  [2468 journals]
  • Deep fake detection using an optimal deep learning model with multi head
           attention-based feature extraction scheme

      Abstract: Face forgery, or deep fake, is a frequently used technique for producing fake face images that fuel online pornography, blackmail, and other illegal activities. To limit the damage caused by deep fake methods, researchers have developed several detection approaches based on the traces left by deep forgery, but these achieve only limited performance in cross-dataset scenarios. This paper proposes an optimal deep learning approach with an attention-based feature learning scheme to perform deep fake detection (DFD) more accurately. The proposed system comprises five phases: face detection, preprocessing, texture feature extraction, spatial feature extraction, and classification. The face regions are initially detected from the collected data using the Viola–Jones (VJ) algorithm. Then, preprocessing resizes and normalizes the detected face regions to improve their quality for detection. Next, texture features are learned using a Butterfly-Optimized Gabor Filter to capture the local features of objects in an image. The spatial features are then extracted using a Residual Network-50 with Multi-Head Attention (RN50MHA) to represent the data globally. Finally, classification is performed using an Optimal Long Short-Term Memory (OLSTM) network, which labels the data as fake or real and whose network is optimized using an Enhanced Archimedes Optimization Algorithm. The proposed system is evaluated on four benchmark datasets: FaceForensics++ (FF++), Deepfake Detection Challenge, Celebrity Deepfake (CDF), and Wild Deepfake. The experimental results show that DFD using OLSTM and RN50MHA achieves a higher inter- and intra-dataset detection rate than existing state-of-the-art methods.
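
      The pipeline above is staged, so its front end can be sketched with standard tools. The following Python snippet (an illustration only, not the paper's implementation) uses OpenCV's Haar-cascade Viola–Jones detector and a small fixed Gabor bank; the butterfly-optimized filter search, RN50MHA, and OLSTM stages are omitted, and every parameter value here is an assumption.

        import cv2
        import numpy as np

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_texture_features(bgr_image, size=(224, 224)):
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            feats = []
            for (x, y, w, h) in faces:
                face = cv2.resize(gray[y:y + h, x:x + w], size).astype(np.float32) / 255.0
                responses = []
                # Small Gabor bank over 4 orientations (assumed, hand-picked hyperparameters)
                for theta in np.arange(0, np.pi, np.pi / 4):
                    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                                lambd=10.0, gamma=0.5, psi=0.0)
                    responses.append(cv2.filter2D(face, cv2.CV_32F, kernel))
                feats.append(np.stack(responses))
            return feats  # one (4, H, W) texture tensor per detected face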
      PubDate: 2024-07-17
       
  • Dual-branch dilated context convolutional for table detection transformer
           in the document images

      Abstract: With the increasing automation of document images such as financial reports, table detection has become a critical component of document automation. It requires models to extract the position of tables in document images without losing information. However, existing techniques still fall short in detecting certain small or irregularly shaped tables. To address this issue, we propose a Transformer-based table detection model. To improve both training efficiency and prediction performance, we fine-tune a pretrained Transformer framework to effectively capture underlying features. Additionally, we integrate a dual-branch dilated context convolutional module that processes high-dimensional features to further improve detection accuracy and robustness for tables of various sizes and shapes. Furthermore, we stack multiple residual convolutional layers to capture and fuse features at different scales, strengthening the network's multi-scale feature representation and thus its detection performance. We use feature maps and heatmaps for visualization to verify the reliability of our method. We evaluate our method on publicly available document datasets, and the results demonstrate that our approach achieves more advanced performance on evaluation metrics such as precision. The code is available at https://github.com/GT-HZ/TD
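
      A dual-branch dilated context module of the kind described can be sketched in a few lines of PyTorch; the dilation rates, the concatenate-then-project fusion, and the residual connection below are assumptions rather than the paper's exact design.

        import torch
        import torch.nn as nn

        class DualBranchDilatedContext(nn.Module):
            def __init__(self, channels, dilations=(2, 4)):
                super().__init__()
                self.branch_a = nn.Conv2d(channels, channels, 3,
                                          padding=dilations[0], dilation=dilations[0])
                self.branch_b = nn.Conv2d(channels, channels, 3,
                                          padding=dilations[1], dilation=dilations[1])
                self.fuse = nn.Conv2d(2 * channels, channels, 1)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                # Two parallel dilated branches capture context at different scales,
                # then a 1x1 conv fuses them back into the residual stream.
                a = self.act(self.branch_a(x))
                b = self.act(self.branch_b(x))
                return x + self.fuse(torch.cat([a, b], dim=1))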
      PubDate: 2024-07-16
       
  • Point cloud downsampling based on the transformer features

      Abstract: This study addresses the problem of downsampling 3D point clouds: reducing the number of points in a point cloud while maintaining high performance in subsequent applications. Current downsampling methods often neglect the geometric relationships among points during sampling. Drawing inspiration from advances in the vision field, this paper introduces a point-based transformer to process point clouds with inherent permutation invariance. We develop a transformer point sampling (TPS) module that is permutation invariant, task specific, and insensitive to noise, making it well suited to point cloud sampling. Experimental results demonstrate that TPS effectively downsamples point clouds while capturing more detailed information, yielding significant improvements on segmentation tasks.
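
      A minimal PyTorch sketch of attention-driven point sampling in the spirit of TPS is shown below: each point is scored by the attention it receives under self-attention and the top-k points are kept. The scoring rule, embedding size, and head count are assumptions, not the paper's design.

        import torch
        import torch.nn as nn

        class AttentionPointSampler(nn.Module):
            def __init__(self, dim=64, heads=4):
                super().__init__()
                self.embed = nn.Linear(3, dim)
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

            def forward(self, xyz, k):
                # xyz: (B, N, 3) point coordinates; keep the k points receiving the most attention.
                f = self.embed(xyz)
                _, w = self.attn(f, f, f, need_weights=True)   # w: (B, N, N), averaged over heads
                score = w.sum(dim=1)
                idx = score.topk(k, dim=1).indices             # (B, k)
                return torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))

        # e.g. AttentionPointSampler()(torch.rand(2, 1024, 3), k=256) -> (2, 256, 3)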
      PubDate: 2024-07-16
       
  • 3D-Scene-Former: 3D scene generation from a single RGB image using
           Transformers

      Abstract: 3D scene generation typically requires complex hardware setups, such as multiple cameras and depth sensors. To address this challenge, there is a need to generate 3D scenes from a single RGB image by understanding the spatio-contextual information inside a scene. However, generating 3D scenes from a single RGB image is a formidable undertaking because depth information is missing. Moreover, the scene must be generated from various angles and positions, which necessitates extrapolating from the limited information in a single image. Current state-of-the-art techniques hinge on extracting global and local features from the 2D scene and employ a combined estimation strategy to tackle this challenge, yet they still struggle to estimate 3D parameters accurately, especially under the strong occlusions of cluttered environments. In this paper, we propose 3D-Scene-Former, a novel solution that generates 3D indoor scenes from a single RGB image and refines the initial estimations using a Transformer network. We evaluate our approach on two well-known datasets, benchmarking it against state-of-the-art solutions. Our method outperforms the state of the art in 3D object detection and 3D pose estimation by a margin of 11.37%. 3D-Scene-Former opens new avenues for 3D content creation, transforming a single RGB image into realistic 3D scenes through the use of interconnected mesh structures.
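
      The refinement step can be illustrated with a small PyTorch module that passes per-object tokens through a Transformer encoder and predicts residual pose corrections; the token content, pose parametrization, and layer sizes below are assumptions, not the 3D-Scene-Former architecture.

        import torch
        import torch.nn as nn

        class PoseRefiner(nn.Module):
            # Refines per-object 3D pose estimates with a Transformer encoder (illustrative only).
            def __init__(self, dim=256, heads=8, layers=4, pose_dim=9):
                super().__init__()
                layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
                self.head = nn.Linear(dim, pose_dim)

            def forward(self, object_tokens, initial_pose):
                # object_tokens: (B, num_objects, dim); initial_pose: (B, num_objects, pose_dim)
                return initial_pose + self.head(self.encoder(object_tokens))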
      PubDate: 2024-07-15
       
  • Learning to sculpt neural cityscapes

      Abstract: We introduce a system that learns to sculpt 3D models of massive urban environments. The majority of humans live their lives in urban environments, and detailed virtual models of them are used for applications as diverse as virtual worlds, special effects, and urban planning. Generating such 3D models from exemplars manually is time-consuming, while 3D deep learning approaches have high memory costs. In this paper, we present a technique for training 2D neural networks to repeatedly sculpt a plane into a large-scale 3D urban environment. An initial coarse depth map is created by a GAN, from which we refine 3D normals and depth using an image translation network regularized by a linear system. The networks are trained on real-world data to allow generative synthesis of meshes at scale. We exploit sculpting from multiple viewpoints to generate a highly detailed, concave, and watertight 3D mesh. We show cityscapes at scales of \(100 \times 1600\) meters with more than 2 million triangles, and demonstrate that our results are objectively and subjectively similar to our exemplars.
      PubDate: 2024-07-12
       
  • ACL-SAR: model agnostic adversarial contrastive learning for robust
           skeleton-based action recognition

      Abstract: Human skeleton data have recently been widely explored in action recognition and human–computer interfaces, thanks to off-the-shelf motion sensors and cameras. With the widespread use of deep models on human skeleton data, their vulnerability to adversarial attacks has raised increasing security concerns. Although several works focus on attack strategies, less effort has been put into defending against adversaries in skeleton-based action recognition, which is nontrivial. In addition, the labels required in adversarial learning are another pain point for adversarial-training-based defense. This paper proposes a robust model-agnostic adversarial contrastive learning framework for this task. First, we introduce an adversarial contrastive learning framework for skeleton-based action recognition (ACL-SAR). Second, the nature of cross-view skeleton data enables cross-view adversarial contrastive learning (CV-ACL-SAR) as a further improvement. Third, adversarial attack and defense strategies are investigated, including alternate instance-wise attacks and options in adversarial training. To validate the effectiveness of our method, we conducted extensive experiments on the NTU-RGB+D and HDM05 datasets. The results show that our defense strategies are not only robust to various adversarial attacks but also maintain generalization.
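
      The core idea, adversarial views combined with a contrastive objective, can be sketched as follows in PyTorch; the one-step FGSM perturbation and the NT-Xent loss are generic stand-ins for the paper's attack and loss choices, and the epsilon and temperature values are assumptions.

        import torch
        import torch.nn.functional as F

        def fgsm_view(encoder, x, eps=0.01):
            # One FGSM step that pushes the embedding away from the clean embedding
            # (a generic instance-wise attack; encoder parameter grads accumulated here
            # should be cleared before the training step).
            anchor = F.normalize(encoder(x), dim=-1).detach()
            x_adv = x.clone().requires_grad_(True)
            z = F.normalize(encoder(x_adv), dim=-1)
            loss = -(z * anchor).sum(dim=-1).mean()
            loss.backward()
            return (x_adv + eps * x_adv.grad.sign()).detach()

        def nt_xent(z1, z2, tau=0.5):
            # Contrastive loss between clean and adversarial embeddings (batch of n each).
            z = F.normalize(torch.cat([z1, z2]), dim=-1)
            sim = z @ z.t() / tau
            sim.fill_diagonal_(float("-inf"))
            n = z1.size(0)
            target = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
            return F.cross_entropy(sim, target)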
      PubDate: 2024-07-11
       
  • AutoCleanDeepFood: auto-cleaning and data balancing transfer learning for
           regional gastronomy food computing

      Abstract: Food computing has emerged as a promising research field, employing artificial intelligence, deep learning, and data science methodologies to enhance various stages of food production pipelines. To this end, the food computing community has compiled a variety of data sets and developed various deep-learning architectures for automatic classification. However, automated food classification remains a significant challenge, particularly for local and regional cuisines, which are often underrepresented in available public-domain data sets. Obtaining high-quality, well-labeled, and well-balanced real-world images is difficult, since manual data curation requires significant human effort and is time-consuming. The web, in contrast, is a potentially unlimited source of food data, but tapping into it carries a high risk of corrupted and wrongly labeled images, and the uneven distribution among food categories can lead to data imbalance. All these issues make it challenging to create clean food data sets from web data. To address this, we present AutoCleanDeepFood, a novel end-to-end food computing framework for regional gastronomy that contains the following components: (i) a fully automated preprocessing pipeline for creating custom data sets related to a specific regional gastronomy, (ii) a transfer learning-based training paradigm that filters out noisy labels through loss ranking and incorporates a Russian Roulette probabilistic approach to mitigate data imbalance, and (iii) a method for deploying the resulting model on smartphones for real-time inference. We assess the performance of our framework on a real-world noisy public-domain data set, ETH Food-101, and two novel web-collected datasets, MENA-150 and Pizza-Styles. We demonstrate the filtering capability of our method through embedding visualization of the feature space using the t-SNE dimension reduction scheme. Our filtering scheme is efficient and effectively improves accuracy in all cases, boosting performance by 0.96, 0.71, and 1.29% on MENA-150, ETH Food-101, and Pizza-Styles, respectively.
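
      The label-cleaning idea, keeping small-loss samples and probabilistically thinning over-represented classes, can be sketched as below; the keep ratio, the median-size balancing target, and the per-class survival rule are assumptions about how the loss-ranking and "Russian Roulette" steps might be combined.

        import torch

        def filter_by_loss(losses, labels, num_classes, keep_ratio=0.7):
            # losses: (N,) per-sample loss, labels: (N,) long tensor of class ids.
            keep = torch.zeros_like(labels, dtype=torch.bool)
            counts = torch.bincount(labels, minlength=num_classes).float()
            target = counts[counts > 0].median()             # aim roughly for the median class size
            for c in range(num_classes):
                cls_idx = (labels == c).nonzero(as_tuple=True)[0]
                if cls_idx.numel() == 0:
                    continue
                n_keep = max(1, int(keep_ratio * cls_idx.numel()))
                clean = cls_idx[losses[cls_idx].argsort()[:n_keep]]   # small-loss samples are likely clean
                p_survive = (target / clean.numel()).clamp(max=1.0)   # thin over-represented classes
                keep[clean[torch.rand(clean.numel()) < p_survive]] = True
            return keep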
      PubDate: 2024-07-09
       
  • Correction: DC-PSENet: a novel scene text detection method integrating
           double ResNet-based and changed channels recursive feature pyramid

      PubDate: 2024-07-06
       
  • Correction: Digital human and embodied intelligence for sports science:
           advancements, opportunities and prospects

      PubDate: 2024-07-05
       
  • Robust consistency learning for facial expression recognition under label
           noise

      Abstract: Label noise is inevitable in facial expression recognition (FER) datasets, especially those collected by web crawling or crowdsourcing in in-the-wild scenarios, which makes the FER task more challenging. Recent advances tackle label noise by leveraging sample selection or constructing label distributions. However, they rely heavily on labels, which can result in confirmation bias. In this paper, we present RCL-Net, a simple yet effective robust consistency learning network that combats label noise by learning robust representations and robust losses. RCL-Net can efficiently handle facial samples with the noisy labels commonly found in real-world datasets. Specifically, we first use a two-view backbone to embed facial images into high- and low-dimensional subspaces and then regularize the geometric structure of these subspaces with an unsupervised dual-consistency learning strategy. Benefiting from this strategy, we obtain robust representations that resist label noise. Further, we impose a robust consistency regularization on the classifiers' predictions to improve the whole network's robustness. Comprehensive evaluations on three popular real-world FER datasets demonstrate that RCL-Net effectively mitigates the impact of label noise and significantly outperforms state-of-the-art noisy-label FER methods. RCL-Net also shows better generalization to other tasks such as CIFAR100 and Tiny-ImageNet. Our code and models will be available at https://github.com/myt889/RCL-Net.
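
      The unsupervised consistency terms can be illustrated with the PyTorch sketch below, which aligns the pairwise-similarity structure of the high- and low-dimensional embeddings and makes the two classifiers' predictions agree; the exact losses and the temperature are assumptions, not RCL-Net's formulation.

        import torch.nn.functional as F

        def dual_consistency_loss(z_high, z_low, logits_a, logits_b, tau=0.5):
            # Structure consistency: the pairwise-similarity matrices of the high- and
            # low-dimensional embeddings should agree.
            s_high = F.softmax(F.normalize(z_high, dim=-1) @ F.normalize(z_high, dim=-1).t() / tau, dim=-1)
            s_low = F.softmax(F.normalize(z_low, dim=-1) @ F.normalize(z_low, dim=-1).t() / tau, dim=-1)
            structure = F.kl_div(s_low.log(), s_high, reduction="batchmean")
            # Prediction consistency: symmetric KL between the two classifier heads.
            p, q = F.log_softmax(logits_a, dim=-1), F.log_softmax(logits_b, dim=-1)
            pred = 0.5 * (F.kl_div(p, q.exp(), reduction="batchmean")
                          + F.kl_div(q, p.exp(), reduction="batchmean"))
            return structure + pred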
      PubDate: 2024-07-05
       
  • MFDNet: Multi-Frequency Deflare Network for efficient nighttime flare
           removal

      Abstract: When light is accidentally scattered or reflected inside the lens, flare artifacts may appear in captured photographs, degrading their visual quality. The main challenge in flare removal is to eliminate the various flare artifacts while preserving the original content of the image. To address this challenge, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian pyramid. Our network decomposes the flare-corrupted image into low- and high-frequency bands, effectively separating the illumination and content information: the low-frequency part typically carries illumination information, while the high-frequency part carries detailed content. Accordingly, MFDNet consists of two main modules: a Low-Frequency Flare Perception Module (LFFPM) that removes flare in the low-frequency band and a Hierarchical Fusion Reconstruction Module (HFRM) that reconstructs the flare-free image. Specifically, to perceive flare from a global perspective while retaining detail for image restoration, LFFPM uses a Transformer to extract global information and a convolutional neural network to capture local features. HFRM then gradually fuses the outputs of LFFPM with the high-frequency components of the image through feature aggregation. Moreover, MFDNet reduces computational cost by processing multiple frequency bands instead of removing flare directly on the input image. Experimental results demonstrate that our approach outperforms state-of-the-art methods in removing nighttime flare on real-world and synthetic images from the Flare7K dataset, and the computational complexity of our model is remarkably low.
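
      The band-splitting front end can be sketched with OpenCV's image pyramids; the number of levels is an assumption, and the LFFPM/HFRM networks themselves are not reproduced here.

        import cv2
        import numpy as np

        def split_bands(image, levels=3):
            # Decompose an image into a low-frequency base plus high-frequency detail bands.
            img = image.astype(np.float32)
            gaussians = [img]
            for _ in range(levels):
                gaussians.append(cv2.pyrDown(gaussians[-1]))
            details = []
            for i in range(levels):
                up = cv2.pyrUp(gaussians[i + 1],
                               dstsize=(gaussians[i].shape[1], gaussians[i].shape[0]))
                details.append(gaussians[i] - up)       # high-frequency detail at this scale
            return gaussians[-1], details               # low-frequency base + detail bands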
      PubDate: 2024-07-04
       
  • A deep dive into enhancing sharing of naturalistic driving data through
           face deidentification

      Abstract: Human factors research in transportation relies on naturalistic driving studies (NDS), which collect real-world data from drivers on actual roads. NDS data offer valuable insights into driving behavior, styles, habits, and safety-critical events, but they often contain personally identifiable information (PII), such as driver face videos, which cannot be publicly shared due to privacy concerns. To address this, our paper introduces a comprehensive framework for deidentifying drivers' face videos that can facilitate wide sharing while protecting PII. Leveraging recent advances in generative adversarial networks (GANs), we explore the efficacy of different face swapping algorithms in preserving essential human-factors attributes while anonymizing participants' identities. Most face swapping algorithms have been tested under restricted lighting conditions and indoor settings; no known study has tested them in adverse, natural situations. We conducted extensive experiments using large-scale outdoor NDS data, quantifying the errors associated with head, mouth, and eye movements, along with other attributes important for human factors research. We also performed qualitative assessments of these methods with human evaluators, providing valuable insights into the quality and fidelity of the deidentified videos. We propose using synthetic faces as substitutes for real faces to enhance generalization, and we provide practical guidelines for video deidentification that emphasize error-threshold creation, spot-checking for abrupt metric changes, and mitigation strategies for reidentification risks. Our findings underscore the nuanced challenges of balancing data utility and privacy, offering valuable insights for enhancing face video deidentification techniques in NDS scenarios.
      PubDate: 2024-07-04
       
  • Real-time salient object detection based on accuracy background and
           salient path source selection

      Abstract: Boundary and connectivity priors are common methods for detecting salient objects in images, but they often suffer from two problems: (1) if the salient object touches the image boundary, its saliency estimate fails, and (2) accurate pixel-wise or superpixel-wise computation is time-consuming. This study proposes a block-wise algorithm that reduces computation time and handles salient objects touching the image boundary. The algorithm consists of four stages. In the first stage, each block is analyzed by an adaptive micro- and macro-prediction technique to generate a saliency prediction map. The second stage selects background and salient sources from the saliency prediction map: background sources are extracted from image-boundary regions with low saliency values, while salient sources are positioned accurately within the salient object region. In the third stage, the background and salient sources are used to generate a background path and a salient path based on the minimum barrier distance, and the block-wise initial saliency map is obtained by fusing the two paths. In the fourth stage, major-color modeling and visual focus priors refine the saliency map to reduce block artifacts. In experiments, the proposed method produced the best results among the compared algorithms on three dataset tests and achieved 284 frames per second (FPS) on the MSRA-10K dataset. Our method shows at least a 29.09% speed improvement and runs in real time on a lightweight embedded platform.
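
      Given block-wise path costs to the background and salient sources, the fusion into an initial saliency map might look like the NumPy sketch below; the normalization and the product-style fusion rule are assumptions, and the minimum-barrier-distance computation itself is omitted.

        import numpy as np

        def fuse_paths(cost_to_background, cost_to_salient, eps=1e-6):
            # Blocks that are far (in path cost) from background sources and close to
            # salient sources receive high saliency.
            d_bg = (cost_to_background - cost_to_background.min()) / (np.ptp(cost_to_background) + eps)
            d_fg = (cost_to_salient - cost_to_salient.min()) / (np.ptp(cost_to_salient) + eps)
            return d_bg * (1.0 - d_fg)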
      PubDate: 2024-07-03
       
  • Deep attentive multimodal learning for food information enhancement via
           early-stage heterogeneous fusion

      Abstract: In contrast to single-modal content, multimodal data can offer greater insight into food statistics more vividly and effectively. Traditional food classification systems, however, focus on a single modality, which is increasingly inadequate as massive amounts of multimodal data emerge daily; this has lately attracted researchers to the field. Moreover, very few multimodal Indian food datasets are available. Building on these observations, we propose a novel multimodal food analysis model based on a deep attentive multimodal fusion network (DAMFN) for integrating lingual and visual modalities. The model has three stages: functional feature extraction, early-stage fusion, and feature classification. In functional feature extraction, deep features are abstracted from the individual modalities. An early-stage fusion is then applied that leverages the deep correlation between the modalities. Lastly, the fused features are passed to the classification stage for the final decision. We further developed a dataset of Indian food images with related captions for the experiments. In addition, the proposed approach is evaluated on a large-scale dataset, UPMC Food-101, with 90,704 instances. The experimental results demonstrate that the proposed DAMFN outperforms several state-of-the-art multimodal food classification methods as well as the individual-modality systems.
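
      An early-stage fusion head can be sketched in PyTorch as below: visual and textual embeddings are concatenated and classified jointly. The feature dimensions and the plain concatenation are assumptions; DAMFN additionally applies attention over the modalities.

        import torch
        import torch.nn as nn

        class EarlyFusionClassifier(nn.Module):
            def __init__(self, img_dim=2048, txt_dim=768, num_classes=101):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(img_dim + txt_dim, 512), nn.ReLU(), nn.Dropout(0.3),
                    nn.Linear(512, num_classes))

            def forward(self, img_feat, txt_feat):
                # Early fusion: concatenate the two modality embeddings before classification.
                return self.head(torch.cat([img_feat, txt_feat], dim=-1))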
      PubDate: 2024-07-03
       
  • Transmission-guided multi-feature fusion Dehaze network

      Abstract: Image dehazing is an important low-level vision task, and its quality and efficiency directly affect the quality of high-level vision tasks. How to process hazy images with different fog thicknesses quickly and efficiently has therefore become a research focus. This paper presents a multi-feature fusion embedded image dehazing network based on transmission guidance. First, we propose a transmission-map-guided feature fusion enhanced encoding network, which can combine different weight information and shows better flexibility for different dehazing information. At the same time, to keep more detail in the reconstructed image, we propose a decoder network embedded with a Mix module, which not only retains shallow information but also lets the network learn the weights of features at different depths spontaneously and re-fit the dehazing features. Comparative experiments on the RESIDE and Haze4K datasets verify the efficiency and high quality of our algorithm, and a series of ablation experiments shows that the multi-weight attention feature fusion (WA) module and the Mix module effectively improve model performance. The code is released at https://doi.org/10.5281/zenodo.10836919.
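
      A minimal reading of the Mix module, a learnable gate that blends shallow skip features with deep decoder features, is sketched below; the single scalar gate is an assumption about the module's actual design.

        import torch
        import torch.nn as nn

        class Mix(nn.Module):
            # Learnable gate between a shallow (skip) feature map and a deep (decoder) feature map.
            def __init__(self, init=0.0):
                super().__init__()
                self.w = nn.Parameter(torch.tensor(init))

            def forward(self, shallow, deep):
                g = torch.sigmoid(self.w)
                return g * shallow + (1.0 - g) * deep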
      PubDate: 2024-07-03
       
  • Generative adversarial networks for handwriting image generation: a review

      Abstract: Handwriting synthesis, the task of automatically generating realistic images of handwritten text, has gained increasing attention in recent years, both as a challenge in itself and as a task that supports handwriting recognition research. In the latter role, the aim is to synthesize large image datasets that can be used to train deep learning models to recognize handwritten text without the need for human-provided annotations. While early attempts at developing handwriting generators yielded limited results [1], more recent works involving deep generative models have been shown to produce realistic imitations of human handwriting [2–19]. In this review, we focus on one of the most prevalent and successful architectures in the field of handwriting synthesis, the generative adversarial network (GAN). We describe the capabilities, architectural specifics, and performance of the GAN-based models introduced in the literature since 2019 [2–14]. These models can generate random handwriting styles, imitate reference styles, and produce realistic images of arbitrary text that was not in the training lexicon. The generated images have been shown to improve handwriting recognition results when they augment the training samples of recognition models. The synthetic images are often hard to expose as non-real, even by human examiners, but they can also be implausible or style-limited. The review includes a discussion of the GAN architecture in comparison with other paradigms in the image-generation domain and highlights the remaining challenges of handwriting synthesis.
      PubDate: 2024-07-02
       
  • Exploring high-quality image deraining Transformer via effective large
           kernel attention

      Abstract: In recent years, Transformers have demonstrated significant performance in single-image deraining. However, the standard self-attention in the Transformer makes it difficult to model local image features effectively. To alleviate this problem, this paper proposes a high-quality deraining Transformer with effective large kernel attention, named ELKAformer. The network employs a Transformer-style Effective Large Kernel Conv-Block (ELKB), which contains three key designs: a Large Kernel Attention Block (LKAB), a Dynamical Enhancement Feed-forward Network (DEFN), and an Edge Squeeze Recovery Block (ESRB), to guide the extraction of rich features. Specifically, LKAB introduces convolutional modulation to substitute for vanilla self-attention and achieve better local representations. The designed DEFN refines the most valuable attention values in LKAB, allowing the overall design to better preserve pixel-wise information. Additionally, we develop ESRB to obtain long-range dependencies across different positions. Extensive experimental results demonstrate that this method achieves favorable results while effectively saving computational cost. Our code is available on GitHub.
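
      Convolutional modulation with a decomposed large kernel, the mechanism LKAB substitutes for self-attention, can be sketched as follows; the kernel sizes follow the commonly used large-kernel-attention decomposition and are assumptions here, not the paper's exact configuration.

        import torch.nn as nn

        class LargeKernelAttention(nn.Module):
            # Depthwise conv + depthwise dilated conv + pointwise conv produce an attention
            # map that modulates the input, approximating a large receptive field cheaply.
            def __init__(self, dim):
                super().__init__()
                self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
                self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)
                self.pw = nn.Conv2d(dim, dim, 1)

            def forward(self, x):
                return x * self.pw(self.dw_dilated(self.dw(x)))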
      PubDate: 2024-07-02
       
  • Effective multi-scale enhancement fusion method for low-light images based
           on interest-area perception OCTM and “pixel healthiness” evaluation

      Abstract: Low-light images suffer from low contrast and low dynamic range, and most existing single-frame low-light enhancement algorithms fall short in detail preservation and color expression while often having high algorithmic complexity. In this paper, we propose a single-frame low-light image fusion enhancement algorithm based on multi-scale contrast-tone mapping and "pixel healthiness" evaluation. It adaptively adjusts the exposure level of each region according to the principal components in the image and enhances contrast while preserving color and detail, with low computational complexity. In particular, to find the most appropriate size of the artificial image sequence and the target enhancement range for each image, we propose a multi-scale parameter determination method based on principal component analysis of the V-channel histogram, obtaining the best enhancement while reducing unnecessary computation. In addition, a new "pixel healthiness" evaluation method based on global illuminance and local contrast is proposed for fast and efficient computation of the fusion weights. Subjective evaluations and objective metrics show that our algorithm outperforms existing single-frame and fusion-based algorithms in enhancement, contrast, color expression, and detail preservation.
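
      The fusion step can be illustrated with a simple well-exposedness weighting over the artificial exposure sequence; this weight is only a stand-in for the paper's "pixel healthiness" score, and the sigma value is an assumption.

        import numpy as np

        def fuse_exposures(images, sigma=0.2):
            # images: list of uint8 arrays of identical shape (the artificial exposure sequence).
            stack = np.stack([im.astype(np.float32) / 255.0 for im in images])
            weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))   # favor well-exposed pixels
            weights /= weights.sum(axis=0, keepdims=True) + 1e-8
            fused = (weights * stack).sum(axis=0)
            return np.clip(fused * 255.0, 0, 255).astype(np.uint8)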
      PubDate: 2024-07-02
       
  • Few-shot anime pose transfer

      Abstract: In this paper, we propose a few-shot method for pose transfer of anime characters: given a source image of an anime character and a target pose, we transfer the pose of the target to the source character. Despite recent advances in pose transfer on images of real people, these methods typically require large numbers of training images of different persons under different poses to achieve reasonable results. However, anime character images are expensive to obtain, since they are created with a lot of artistic authoring. To address this, we propose a meta-learning framework for few-shot pose transfer, which generalizes well to an unseen character given just a few examples of that character. Further, we propose fusion residual blocks to align the features of the source and target so that the appearance of the source character can be transferred to the target pose. Experiments show that our method outperforms leading pose transfer methods, especially when the source characters are not in the training set.
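
      A fusion residual block of the kind described might be sketched as below, merging source-appearance and target-pose feature maps with a residual connection; the concatenate-then-convolve design and the normalization choice are assumptions rather than the paper's architecture.

        import torch
        import torch.nn as nn

        class FusionResidualBlock(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(2 * channels, channels, 3, padding=1),
                    nn.InstanceNorm2d(channels), nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, 3, padding=1),
                    nn.InstanceNorm2d(channels))

            def forward(self, src_feat, pose_feat):
                # Merge target-pose features into the source-appearance stream.
                return src_feat + self.body(torch.cat([src_feat, pose_feat], dim=1))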
      PubDate: 2024-07-01
       
  • Preface The Visual Computer (vol 40 issue 07)

      PubDate: 2024-06-26
       
 