Visual Informatics
Open Access journal
ISSN (Online): 2468-502X
Published by Elsevier
  • Visual simulation of clouds

    • Authors: Yoshinori Dobashi; Kei Iwasaki; Yonghao Yue; Tomoyuki Nishita
      Pages: 1 - 8
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Yoshinori Dobashi, Kei Iwasaki, Yonghao Yue, Tomoyuki Nishita
      Clouds play an important role when synthesizing realistic images of outdoor scenes, and the realistic display of clouds is therefore one of the important research topics in computer graphics. In order to display realistic clouds, we need methods for modeling, rendering, and animating clouds realistically. It is also important to control the shapes and appearances of clouds to create certain visual effects. In this paper, we explain our efforts and research results toward meeting such requirements, together with related research on the visual simulation of clouds.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.001
       
  • Support-free interior carving for 3D printing

    • Authors: Yue Xie; Xiang Chen
      Pages: 9 - 15
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Yue Xie, Xiang Chen
      Recent interior carving methods for functional design necessitate a cumbersome cut-and-glue process in fabrication. We propose a method to generate interior voids that not only satisfy the functional purposes but are also support-free during the 3D printing process. We introduce a support-free unit structure for voxelization and derive a wall-thickness parametrization for continuous optimization. We also design a discrete dithering algorithm to ensure the printability of ghost voxels. The interior voids are iteratively carved by alternating the optimization and dithering. We apply our method to optimize static and rotational stability, and print various results to evaluate its efficacy.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.002
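The support-free requirement above can be pictured with a toy voxel test: printing bottom-up, a filled voxel is unsupported when nothing lies directly beneath it. This is purely illustrative — the paper's unit structures encode a more refined criterion — and `solid` and `needs_support` are hypothetical names.

```python
def needs_support(solid, x, y, z):
    """Illustrative support check for bottom-up 3D printing (+z build
    direction): a filled voxel needs support material if the voxel
    directly beneath it is empty.  `solid` maps (x, y, z) -> bool."""
    if z == 0:
        return False  # rests on the build plate
    return not solid.get((x, y, z - 1), False)
```

A support-free interior void, in this simplified view, is one whose ceiling voxels never trigger this test.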
       
  • Image grid display: A study on automatic scrolling presentation

    • Authors: Marco Porta; Stefania Ricotti
      Pages: 16 - 24
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Marco Porta, Stefania Ricotti
      In this paper we describe a study on image grid display with automatic vertical scrolling. While scroll operations are normally carried out manually by the user, in the context of RSVP (Rapid Serial Visual Presentation) techniques this work considers a presentation mode in which the image grid is scrolled automatically. Through experiments carried out with 50 testers, we investigated user performance while looking for specific target subjects within large collections of images, considering different numbers of columns and scrolling speeds. The search task involved both clicking on the identified target pictures and simply stating their recognition aloud. To this end, and to identify possible specific gaze behaviours, eye-tracking technology was exploited. The results show that the number of columns and the scroll speed do affect search performance. Moreover, the user’s gaze tends to focus on different screen areas depending on the values of these two parameters. Although it is not possible to identify an optimal columns–speed combination that is valid in all cases, the particular context of use can suggest feasible solutions according to one’s needs. To the best of our knowledge, image grid display with automatic scrolling has never been studied before.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.003
       
  • Visual exploration of movement and event data with interactive time masks

    • Authors: Natalia Andrienko; Gennady Andrienko; Elena Camossi; Christophe Claramunt; Jose Manuel Cordero Garcia; Georg Fuchs; Melita Hadzagic; Anne-Laure Jousselme; Cyril Ray; David Scarlatti; George Vouros
      Pages: 25 - 39
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Natalia Andrienko, Gennady Andrienko, Elena Camossi, Christophe Claramunt, Jose Manuel Cordero Garcia, Georg Fuchs, Melita Hadzagic, Anne-Laure Jousselme, Cyril Ray, David Scarlatti, George Vouros
      We introduce the concept of the time mask, a type of temporal filter that selects multiple disjoint time intervals in which given query conditions are fulfilled. Such a filter can be applied to time-referenced objects, such as events and trajectories, to select those objects, or segments of trajectories, that fall within one of the selected intervals. The selected subsets of objects or segments are dynamically summarized in various ways, and the summaries are represented visually on maps and/or other displays to enable exploration. Time mask filtering can be especially helpful in the analysis of disparate data (e.g., event records, positions of moving objects, and time series of measurements), which may come from different sources. To detect relationships between such data, the analyst may set query conditions on the basis of one dataset and investigate the subsets of objects and values in the other datasets that co-occurred in time with these conditions. We describe the desired features of an interactive tool for time mask filtering and present a possible implementation of such a tool. Using the analysis of two real-world data collections related to aviation and maritime traffic as examples, we show how time masks can be used in combination with other types of filters and demonstrate the utility of time mask filtering.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.004
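The time-mask idea lends itself to a compact sketch: build disjoint intervals where a boolean condition holds over a sampled time series, then keep only the time-referenced events that fall inside some interval. This is an illustrative reimplementation, not the authors' tool; all names are hypothetical.

```python
from bisect import bisect_right

def time_mask(times, condition):
    """Build disjoint [start, end) intervals where `condition` holds.
    `times` are sorted sample timestamps; `condition` is a bool per sample."""
    intervals, start = [], None
    for t, ok in zip(times, condition):
        if ok and start is None:
            start = t
        elif not ok and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, times[-1]))
    return intervals

def apply_mask(event_times, intervals):
    """Keep event timestamps falling inside any mask interval."""
    starts = [s for s, _ in intervals]
    kept = []
    for t in event_times:
        i = bisect_right(starts, t) - 1  # rightmost interval starting <= t
        if i >= 0 and t < intervals[i][1]:
            kept.append(t)
    return kept
```

Conditions derived from one dataset (say, measurement thresholds) can then mask events or trajectory segments from another, mirroring the cross-dataset use described in the abstract.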
       
  • Towards better analysis of machine learning models: A visual analytics perspective

    • Authors: Shixia Liu; Xiting Wang; Mengchen Liu; Jun Zhu
      Pages: 48 - 56
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Shixia Liu, Xiting Wang, Mengchen Liu, Jun Zhu
      Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics have led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.006
       
  • Spatio-temporal flow maps for visualizing movement and contact patterns

    • Authors: Bing Ni; Qiaomu Shen; Jiayi Xu; Huamin Qu
      Pages: 57 - 64
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Bing Ni, Qiaomu Shen, Jiayi Xu, Huamin Qu
      Advanced telecom technologies and the massive number of mobile phone users have yielded a huge amount of real-time data on people’s all-in-one telecommunication records, which we call telco big data. With telco data and domain knowledge of an urban city, we can now analyze the movement and contact patterns of humans at an unprecedented scale. The flow map is widely used to display movements of humans from a single source to multiple destinations by representing locations as nodes and movements as edges, but it fails at the task of visualizing both movement and contact data. In addition, analysts often need to compare and examine the patterns side by side and perform various quantitative analyses. In this work, we propose a novel spatio-temporal flow map layout to visualize when and where people from different locations move into the same places and make contact. We also propose integrating the spatio-temporal flow maps into existing spatio-temporal visualization techniques to form a suite of techniques for visualizing movement and contact patterns. We report a potential application of the proposed techniques. The results show that our design and techniques properly unveil hidden information, and that analysis can be performed efficiently.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.007
       
  • Recent advances in transient imaging: A computer graphics and vision perspective

    • Authors: Adrian Jarabo; Belen Masia; Julio Marco; Diego Gutierrez
      Pages: 65 - 79
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Adrian Jarabo, Belen Masia, Julio Marco, Diego Gutierrez
      Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at picosecond or nanosecond resolutions, information usually lost during capture-time temporal integration. This paper presents recent advances in the field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications, and simulation.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.008
       
  • Message from the Editors-in-Chief

    • Authors: Hans-Peter Seidel; Kun Zhou
      First page: 80
      Abstract: Publication date: March 2017
      Source:Visual Informatics, Volume 1, Issue 1
      Author(s): Hans-Peter Seidel, Kun Zhou


      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.01.009
       
  • Exploring the design space of immersive urban analytics

    • Authors: Zhutian Chen; Yifang Wang; Tianchen Sun; Xiang Gao; Wei Chen; Zhigeng Pan; Huamin Qu; Yingcai Wu
      Abstract: Publication date: Available online 5 December 2017
      Source:Visual Informatics
      Author(s): Zhutian Chen, Yifang Wang, Tianchen Sun, Xiang Gao, Wei Chen, Zhigeng Pan, Huamin Qu, Yingcai Wu
      Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of combination methods of 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.11.002
       
  • Comparative eye-tracking evaluation of scatterplots and parallel coordinates

    • Authors: Rudolf Netzel; Jenny Vuong; Ulrich Engelke; Seán O’Donoghue; Daniel Weiskopf; Julian Heinrich
      Abstract: Publication date: Available online 2 December 2017
      Source:Visual Informatics
      Author(s): Rudolf Netzel, Jenny Vuong, Ulrich Engelke, Seán O’Donoghue, Daniel Weiskopf, Julian Heinrich
      We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further shows significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants’ gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants’ attention is biased: toward the center for parallel coordinates and skewed to the center/left side of the plot for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.11.001
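The retrieval task given to participants reduces to a multidimensional Euclidean distance comparison, which a minimal sketch can make concrete. Illustrative only; `closer_target` is a hypothetical name, not part of the study's materials.

```python
import math

def closer_target(reference, target_a, target_b):
    """Return 'A' or 'B' depending on which target point lies closer to
    the reference point in multidimensional Euclidean distance.
    All points are equal-length coordinate tuples."""
    da = math.dist(reference, target_a)
    db = math.dist(reference, target_b)
    return 'A' if da < db else 'B'
```

The study's controlled relative distances (15%, 20%, 25%) would correspond to fixed ratios between `da` and `db`.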
       
  • Prediction-based load balancing and resolution tuning for interactive volume raycasting

    • Authors: Valentin Bruder; Steffen Frey; Thomas Ertl
      Abstract: Publication date: Available online 18 September 2017
      Source:Visual Informatics
      Author(s): Valentin Bruder, Steffen Frey, Thomas Ertl
      We present an integrated approach for real-time performance prediction of volume raycasting that we employ for load balancing and sampling resolution tuning. In volume rendering, the usage of acceleration techniques such as empty space skipping and early ray termination, among others, can cause significant variations in rendering performance when users adjust the camera configuration or transfer function. These variations in rendering times may result in unpleasant effects such as jerky motions or abruptly reduced responsiveness during interactive exploration. To avoid those effects, we propose an integrated approach to adapt rendering parameters according to performance needs. We assess performance-relevant data on-the-fly, for which we propose a novel technique to estimate the impact of early ray termination. On the basis of this data, we introduce a hybrid model to achieve accurate predictions with minimal computational footprint. Our hybrid model incorporates aspects of analytical performance modeling and machine learning, with the goal of combining their respective strengths. We show the applicability of our prediction model for two different use cases: (1) dynamically steering the sampling density in object and/or image space, and (2) dynamically distributing the workload among several different parallel computing devices. Our approach allows us to reliably meet performance requirements such as a user-defined frame rate, even in the case of sudden large changes to the transfer function or the camera orientation.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.09.001
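Early ray termination, whose performance impact the authors estimate on the fly, can be illustrated with a minimal front-to-back compositing loop: once accumulated opacity nears 1, the remaining samples along the ray contribute nothing visible and are skipped. A sketch under stated assumptions, not the paper's implementation; all names are hypothetical.

```python
def composite_ray(samples, alpha_threshold=0.99):
    """Front-to-back alpha compositing with early ray termination (ERT).
    `samples` is a list of (color, alpha) pairs along one ray, front first.
    Returns (color, alpha, n_taken); samples beyond n_taken were skipped,
    which is exactly the cost saving ERT provides."""
    color, alpha, taken = 0.0, 0.0, 0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # attenuate by accumulated opacity
        alpha += (1.0 - alpha) * a
        taken += 1
        if alpha >= alpha_threshold:
            break                         # ray is effectively opaque
    return color, alpha, taken
```

The variation this introduces — rays through dense regions terminate after a few samples, rays through empty space do not — is why camera or transfer-function changes shift rendering cost so sharply.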
       
  • A cache-friendly sampling strategy for texture-based volume rendering on GPU

    • Authors: Junpeng Wang; Fei Yang; Yong Cao
      Abstract: Publication date: Available online 8 September 2017
      Source:Visual Informatics
      Author(s): Junpeng Wang, Fei Yang, Yong Cao
      Texture-based volume rendering is a memory-intensive algorithm whose performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory, resulting in incoherent memory access patterns and low cache hit rates in certain cases. The distance between samples taken by the threads of an atomic scheduling unit of the GPU (e.g., a warp of 32 threads in CUDA) is a crucial factor affecting texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. We thoroughly analyze the effects of different sample organizations and different thread–pixel mappings in the ray-casting algorithm. We also introduce a pipelined color-blending approach and leverage warp-level GPU operations to improve the efficiency of parallel execution on the GPU. In addition, the rendering performance of Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that require rendering large dynamic volumes at low image resolutions. Through a series of micro-benchmarks and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.08.001
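The warp-coherence argument can be illustrated with a toy thread–pixel mapping: assigning a 32-thread warp to a compact 8×4 pixel tile, rather than a 32×1 row, keeps neighbouring lanes casting rays through nearby voxels, so their texture fetches share cache lines. This is a simplified illustration of the locality idea only, not the paper's actual Warp Marching scheme.

```python
def tile_mapping(lane, tile_w=8, tile_h=4):
    """Map a warp lane id (0..tile_w*tile_h-1) to a 2D pixel offset
    inside a compact tile.  With an 8x4 tile, the farthest two lanes
    are ~8 pixels apart, versus 31 pixels for a 32x1 row mapping,
    which keeps a warp's texture samples spatially close."""
    assert 0 <= lane < tile_w * tile_h
    return lane % tile_w, lane // tile_w
```

Swapping `tile_w=32, tile_h=1` reproduces the row mapping, making the locality difference easy to measure in a simulator.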
       
  • A visual analytics design for studying rhythm patterns from human daily movement data

    • Authors: Wei Zeng; Chi-Wing Fu; Stefan Müller Arisona; Simon Schubiger; Remo Burkhard; Kwan-Liu Ma
      Abstract: Publication date: Available online 31 August 2017
      Source:Visual Informatics
      Author(s): Wei Zeng, Chi-Wing Fu, Stefan Müller Arisona, Simon Schubiger, Remo Burkhard, Kwan-Liu Ma
      Humans’ daily movements exhibit high regularity in a space–time context, typically forming circadian rhythms. Understanding the rhythms of human daily movements is of high interest to a variety of parties, from urban planners and transportation analysts to business strategists. In this paper, we present an interactive visual analytics design for understanding and utilizing data collected from tracking human movements. The resulting system identifies and visually presents frequent human movement rhythms to support interactive exploration and analysis of the data over space and time. Case studies using real-world human movement data, including massive urban public transportation data in Singapore and the MIT Reality Mining dataset, together with interviews with transportation researchers, were conducted to demonstrate the effectiveness and usefulness of our system.

      PubDate: 2017-12-06T15:55:39Z
      DOI: 10.1016/j.visinf.2017.07.001
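At its simplest, extracting a circadian rhythm from movement records is an hour-of-day histogram: bin event timestamps by hour and look for recurring peaks (morning and evening commutes, for instance). This is only a minimal sketch of the idea; the paper's system performs much richer frequent-rhythm mining, and `hourly_rhythm` is a hypothetical name.

```python
from collections import Counter
from datetime import datetime

def hourly_rhythm(timestamps):
    """Bin ISO-format movement-event timestamps by hour of day.
    Returns a 24-element histogram; recurring peaks across days
    expose circadian regularity in the movement data."""
    counts = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    return [counts.get(h, 0) for h in range(24)]
```

Applied per transit stop or per user, such histograms become the raw material for comparing rhythms over space, as the abstract describes.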
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2016