Digital Investigation
  [SJR: 0.674]   [H-I: 32]
   Full-text available via subscription
   ISSN (Print) 1742-2876
   Published by Elsevier
  • Do digital investigators have to program? A controlled experiment in
           digital investigation
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement
      Author(s): Felix Freiling, Christian Zoubek
      We report on the results of an exploratory study in which graduate students played the role of digital investigators within an advanced digital forensics course. Overall, 39 students were split up into 10 groups. Each group had to solve one out of three arguably realistic cases within a time frame of 11 weeks. Participants had to log their actions and the corresponding time effort. The resulting data was analyzed in order to identify differences in investigative strategies as well as factors that influence the quality of the results. As can be expected, the total effort (in minutes) generally positively influences the results, but rather surprisingly, participants did not (have to) program to solve the cases although they were restricted to using publicly available tools.

      PubDate: 2017-11-08T18:11:18Z
       
  • Hybrid approaches to digital forensic investigations: A comparative
           analysis in an institutional context
    • Abstract: Publication date: Available online 16 October 2017
      Source:Digital Investigation
      Author(s): Diana S. Dolliver, Carson Collins, Beau Sams
      Law enforcement agencies across the country are struggling to keep pace with processing and analyzing digital evidence seized in active criminal cases. One unique response to these challenges is the formation of a hybrid digital forensic task force: a formal partnership between higher educational institutions and municipal and/or state law enforcement agencies to further education, research, and investigatory capacity in the digital forensic field. To better understand this organizational model, this study conducted a comparative analysis between eight such task forces in the United States using the theoretical guidance of neo-institutional theory, the first such national assessment. The findings indicated that there was not one “common” model between the task forces – the number of examiners ranged from 1 to 7 or more, the average number of years the task forces were operational was 7, and the academic components varied. Despite model variation, task forces reported that the benefits related to the integration of academia with law enforcement agencies enhanced their capabilities to better serve their communities and provide a greater benefit to society.

      PubDate: 2017-10-17T15:09:56Z
       
  • Leveraging virtual machine introspection with memory forensics to detect
           and characterize unknown malware using machine learning techniques at
           hypervisor
    • Abstract: Publication date: Available online 16 October 2017
      Source:Digital Investigation
      Author(s): M.A. Ajay Kumara, C.D. Jaidhar
      Virtual Machine Introspection (VMI) has emerged as a fine-grained, out-of-VM security solution that detects malware by introspecting and reconstructing the volatile memory state of the live guest Operating System (OS). Specifically, it operates at the Virtual Machine Monitor (VMM), or hypervisor. The reconstructed semantic details obtained by the VMI are available in a combination of benign and malicious states at the hypervisor. In order to distinguish between these two states, the existing out-of-VM security solutions require extensive manual analysis. In this paper, we propose an advanced VMM-based, guest-assisted Automated Internal-and-External (A-IntExt) introspection system by leveraging VMI, Memory Forensics Analysis (MFA), and machine learning techniques at the hypervisor. Further, we use the VMI-based technique to introspect digital artifacts of the live guest OS to obtain a semantic view of the process details. We implemented an Intelligent Cross View Analyzer (ICVA) and implanted it into our proposed A-IntExt system, which examines the data supplied by the VMI to detect hidden, dead, and dubious processes, while also predicting early symptoms of malware execution on the introspected guest OS in a timely manner. Machine learning techniques are used to analyze the executables that are mined and extracted using MFA-based techniques and to ascertain the malicious executables. The practicality of the A-IntExt system is evaluated by executing a large set of real-world malware and benign executables on the live guest OSs. The evaluation results achieved 99.55% accuracy and a 0.004 False Positive Rate (FPR) on 10-fold cross-validation in detecting unknown malware on the generated dataset. Additionally, the proposed system was validated against other benchmark malware datasets, and the A-IntExt system outperformed detection of real-world malware at the VMM by a margin exceeding 6.3%.

      PubDate: 2017-10-17T15:09:56Z
       
  • A methodology for the security evaluation within third-party Android
           Marketplaces
    • Abstract: Publication date: Available online 13 October 2017
      Source:Digital Investigation
      Author(s): William J. Buchanan, Simone Chiale, Richard Macfarlane
      This paper aims to evaluate possible threats posed by unofficial Android marketplaces and to geo-localize malware distribution over three main regions: China, Europe, and Russia. It provides a comprehensive review of the existing academic literature on security in Android, focusing especially on malware detection systems and existing malware databases. Through the implementation of a methodology for identifying malicious applications, data were collected revealing that 5% of the applications analyzed were malicious overall. Furthermore, the analysis showed that Russia and Europe have a preponderance of generic detections and adware, while China is targeted mainly by riskware and malware.

      PubDate: 2017-10-17T15:09:56Z
       
  • A simple and effective image-statistics-based approach to detecting
           recaptured images from LCD screens
    • Abstract: Publication date: Available online 6 October 2017
      Source:Digital Investigation
      Author(s): Kai Wang
      It is now extremely easy to recapture high-resolution and high-quality images from LCD (Liquid Crystal Display) screens. Recaptured image detection is an important digital forensic problem, as image recapture is often involved in the creation of a fake image in an attempt to increase its visual plausibility. State-of-the-art image recapture forensic methods make use of strong prior knowledge about the recapturing process and are based on either the combination of a group of ad-hoc features or a specific and somehow complicated dictionary learning procedure. By contrast, we propose a conceptually simple yet effective method for recaptured image detection which is built upon simple image statistics and a very loose assumption about the recapturing process. The adopted features are pixel-wise correlation coefficients in image differential domains. Experimental results on two large databases of high-resolution, high-quality recaptured images and comparisons with existing methods demonstrate the forensic accuracy and the computational efficiency of the proposed method.
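The adopted features can be pictured with a small sketch. This is our simplified, single-direction reading of "pixel-wise correlation coefficients in image differential domains" (function names are ours, not the paper's code): take first-order horizontal differences of a grayscale image, then measure the Pearson correlation between adjacent samples of that difference signal.

```python
# Hedged sketch: correlation of neighbouring samples in a differential
# (gradient) domain, one of the simple statistics the abstract describes.
# The single-direction simplification and all names are our assumptions.

def horizontal_diff(img):
    """First-order horizontal differences of a 2D grayscale image."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient; 0.0 when a variance is zero."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

def neighbour_correlation(img):
    """Correlation between adjacent samples in the horizontal difference domain."""
    d = horizontal_diff(img)
    xs = [row[x] for row in d for x in range(len(row) - 1)]
    ys = [row[x + 1] for row in d for x in range(len(row) - 1)]
    return pearson(xs, ys)
```

A classifier would use such coefficients (computed in several differential directions) as features; the intuition is that the periodic aliasing left by an LCD recapture changes these local correlations.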

      PubDate: 2017-10-10T14:37:23Z
       
  • Improving the reliability of chip-off forensic analysis of NAND flash
           memory devices
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement
      Author(s): Aya Fukami, Saugata Ghose, Yixin Luo, Yu Cai, Onur Mutlu
      Digital forensic investigators often need to extract data from a seized device that contains NAND flash memory. Many such devices are physically damaged, preventing investigators from using automated techniques to extract the data stored within the device. Instead, investigators turn to chip-off analysis, where they use a thermal-based procedure to physically remove the NAND flash memory chip from the device, and access the chip directly to extract the raw data stored on the chip. We perform an analysis of the errors introduced into multi-level cell (MLC) NAND flash memory chips after the device has been seized. We make two major observations. First, between the time that a device is seized and the time digital forensic investigators perform data extraction, a large number of errors can be introduced as a result of charge leakage from the cells of the NAND flash memory (known as data retention errors). Second, when thermal-based chip removal is performed, the number of errors in the data stored within NAND flash memory can increase by two or more orders of magnitude, as the high temperature applied to the chip greatly accelerates charge leakage. We demonstrate that the chip-off analysis based forensic data recovery procedure is quite destructive, and can often render most of the data within NAND flash memory uncorrectable, and, thus, unrecoverable. To mitigate the errors introduced during the forensic recovery process, we explore a new hardware-based approach. We exploit a fine-grained read reference voltage control mechanism implemented in modern NAND flash memory chips, called read-retry, which can compensate for the charge leakage that occurs due to (1) retention loss and (2) thermal-based chip removal. The read-retry mechanism successfully reduces the number of errors, such that the original data can be fully recovered in our tested chips as long as the chips were not heavily used prior to seizure. We conclude that the read-retry mechanism should be adopted as part of the forensic data recovery process.
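Why read-retry helps can be shown with a toy model (entirely ours, not the authors' method): retention loss shifts cell voltages downward, so reading with the factory reference threshold misclassifies leaked cells, while re-reading with a lowered reference recovers them.

```python
# Toy model of read-retry. Voltages, thresholds, and names are illustrative
# assumptions; real MLC chips have multiple references per cell.

def read_cells(voltages, reference):
    """Threshold read: 1 if the cell voltage is above the reference, else 0."""
    return [1 if v > reference else 0 for v in voltages]

def read_retry(voltages, stored_bits, references):
    """Try successively lower references; keep the read with fewest errors."""
    best = None
    for ref in references:
        bits = read_cells(voltages, ref)
        errors = sum(a != b for a, b in zip(bits, stored_bits))
        if best is None or errors < best[1]:
            best = (bits, errors, ref)
        if errors == 0:
            break
    return best
```

In this model, cells programmed to 1 at 2.0 V that have leaked down to 0.8 V all read as 0 at a 1.0 V reference, but read correctly once the reference is retried at 0.6 V.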

      PubDate: 2017-10-10T14:37:23Z
       
  • Bit-errors as a source of forensic information in NAND-flash memory
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement
      Author(s): Jan Peter van Zandwijk
      The value of bit-errors as a source of forensic information is investigated by experiments on isolated NAND-flash chips and USB thumb-drives. Experiments on isolated NAND-flash chips, programmed directly using specialized equipment, show detectable differences in retention bit-errors over forensically relevant time periods with the device used within manufacturer specifications. In experiments with USB thumb-drives, the controller is used to load files at different times onto the drives, some of which have been subjected to stress-cycling. Retention bit-error statistics of memory pages obtained by offline analysis of NAND-flash chips from the thumb-drives are to some extent linked to the time files are loaded onto the drives. Considerable variation between USB thumb-drives makes interpretation of bit-error statistics in an absolute sense difficult, although in a relative sense bit-error statistics seem to have some potential as an independent side-channel of forensic information.
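The underlying measurement is simple to state: compare the bytes read back from a flash page against the known reference pattern and count flipped bits. A minimal sketch (function names are ours):

```python
# Count retention bit-errors between a known reference pattern and the data
# read back from a flash page. Names are illustrative, not the paper's code.

def bit_errors(reference: bytes, readback: bytes) -> int:
    """Number of bit positions that differ between two equal-length buffers."""
    assert len(reference) == len(readback)
    return sum(bin(a ^ b).count("1") for a, b in zip(reference, readback))

def bit_error_rate(reference: bytes, readback: bytes) -> float:
    """Fraction of flipped bits in the page."""
    return bit_errors(reference, readback) / (8 * len(reference))
```

Tracking how this rate grows over time (and after stress-cycling) is what makes bit-errors usable as the side-channel the abstract describes.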

      PubDate: 2017-10-10T14:37:23Z
       
  • Whitelisting system state in windows forensic memory visualizations
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20
      Author(s): Joshua A. Lapso, Gilbert L. Peterson, James S. Okolica
      Examiners in the field of digital forensics regularly encounter enormous amounts of data and must identify the few artifacts of evidentiary value. One challenge these examiners face is manual reconstruction of complex datasets with both hierarchical and associative relationships. The complexity of this data requires significant knowledge, training, and experience to correctly and efficiently examine. Current methods provide text-based representations or low-level visualizations, but leave the task of maintaining global context of system state on the examiner. This research presents a visualization tool that improves analysis methods through simultaneous representation of the hierarchical and associative relationships and local detailed data within a single page application. A novel whitelisting feature further improves analysis by eliminating items of less interest from view. Results from a pilot study demonstrate that the visualization tool can assist examiners to more accurately and quickly identify artifacts of interest.

      PubDate: 2017-10-10T14:37:23Z
       
  • Modern windows hibernation file analysis
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20
      Author(s): Joe T. Sylve, Vico Marziale, Golden G. Richard
      This paper presents the first analysis of the new hibernation file format that is used in Windows versions 8, 8.1, and 10. We also discuss several changes in the hibernation and shutdown behavior of Windows that will have a direct impact on digital forensic practitioners who use hibernation files as sources of evidence.

      PubDate: 2017-10-10T14:37:23Z
       
  • Picking up the trash: Exploiting generational GC for memory analysis
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement
      Author(s): Adam Pridgen, Simson Garfinkel, Dan S. Wallach
      Memory analysis is slowly moving up the software stack. Early analysis efforts focused on core OS structures and services. As this field evolves, more information becomes accessible because analysis tools can build on foundational frameworks like Volatility and Rekall. This paper demonstrates and establishes memory analysis techniques for managed runtimes, namely the HotSpot Java Virtual Machine (JVM). We exploit the fact that residual artifacts remain in the JVM's heap to create basic timelines, reconstruct objects, and extract contextual information. These artifacts exist because the JVM copies objects from one place to another during garbage collection and fails to overwrite old data in a timely manner. This work focuses on the Hotspot JVM, but it can be generalized to other managed run-times like Microsoft .Net or Google's V8 JavaScript Engine.

      PubDate: 2017-10-10T14:37:23Z
       
  • Memory forensics: The path forward
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20
      Author(s): Andrew Case, Golden G. Richard
      Traditionally, digital forensics focused on artifacts located on the storage devices of computer systems, mobile phones, digital cameras, and other electronic devices. In the past decade, however, researchers have created a number of powerful memory forensics tools that expand the scope of digital forensics to include the examination of volatile memory as well. While memory forensic techniques have evolved from simple string searches to deep, structured analysis of application and kernel data structures for a number of platforms and operating systems, much research remains to be done. This paper surveys the state-of-the-art in memory forensics, provides a critical analysis of current-generation techniques, describes important changes in operating system design that impact memory forensics, and sketches important areas for further research.

      PubDate: 2017-10-10T14:37:23Z
       
  • EviPlant: An efficient digital forensic challenge creation, manipulation
           and distribution solution
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement
      Author(s): Mark Scanlon, Xiaoyu Du, David Lillis
      Education and training in digital forensics requires a variety of suitable challenge corpora containing realistic features including regular wear-and-tear, background noise, and the actual digital traces to be discovered during investigation. Typically, the creation of these challenges requires overly arduous effort on the part of the educator to ensure their viability. Once created, the challenge image needs to be stored and distributed to a class for practical training. This storage and distribution step requires significant time and resources and may not even be possible in an online/distance learning scenario due to the data sizes involved. As part of this paper, we introduce a more capable methodology and system as an alternative to current approaches. EviPlant is a system designed for the efficient creation, manipulation, storage and distribution of challenges for digital forensics education and training. The system relies on the initial distribution of base disk images, i.e., images containing solely base operating systems. In order to create challenges for students, educators can boot the base system, emulate the desired activity and perform a “diffing” of the resultant image and the base image. This diffing process extracts the modified artefacts and associated metadata and stores them in an “evidence package”. Evidence packages can be created for different personae, different wear-and-tear, different emulated crimes, etc., and multiple evidence packages can be distributed to students and integrated into the base images. A number of additional applications in digital forensic challenge creation for tool testing and validation, proficiency testing, and malware analysis are also discussed as a result of using EviPlant.
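The diffing step can be illustrated with a toy block-level comparison. EviPlant's real evidence packages also carry artefact metadata; this sketch (all names ours, not EviPlant code) only shows the core idea of keeping the changed blocks and their offsets, and of integrating a package back into a base image.

```python
# Toy "diffing" of a modified disk image against a base image, keeping only
# the blocks that changed. A hypothetical sketch, not EviPlant's format.

BLOCK = 4096

def diff_images(base: bytes, modified: bytes, block=BLOCK):
    """Return {offset: block_bytes} for every block that differs."""
    package = {}
    for off in range(0, len(modified), block):
        if modified[off:off + block] != base[off:off + block]:
            package[off] = modified[off:off + block]
    return package

def apply_package(base: bytes, package: dict) -> bytes:
    """Integrate an evidence package back into a base image."""
    img = bytearray(base)
    for off, data in package.items():
        img[off:off + len(data)] = data
    return bytes(img)
```

Because a package holds only the delta, distributing many scenario variants of one base image costs far less than shipping full disk images, which is the storage/distribution saving the abstract claims.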

      PubDate: 2017-10-10T14:37:23Z
       
  • Identifying offenders on Twitter: A law enforcement practitioner guide
    • Abstract: Publication date: Available online 23 September 2017
      Source:Digital Investigation
      Author(s): Graeme Horsman, Kevin Ginty, Paul Cranner
      Twitter remains one of the most popular social media network sites in use today and continues to attract criticism over the volume of unsavoury and illegal content circulated by its users. When breaches of legislation occur, appropriate officials are left with the task of identifying and apprehending the physical user of an offending account, which is not always a simple task. This article provides a law enforcement practitioner focused analysis of the Twitter platform and associated services for the purposes of offender identification. Using our bespoke message harvesting tool ‘Twitterstream’, an analysis of the data available via Twitter's Streaming and REST APIs is presented, along with the message metadata which can be gleaned. The process of identifying those behind offending Twitter accounts is discussed in line with available API content and current Twitter data retention policies in order to support law enforcement investigations surrounding this social media platform.

      PubDate: 2017-09-25T02:34:36Z
       
  • Forensic analysis of Telegram Messenger on Android smartphones
    • Abstract: Publication date: Available online 20 September 2017
      Source:Digital Investigation
      Author(s): Cosimo Anglano, Massimo Canonico, Marco Guazzone
      In this paper we present a methodology for the forensic analysis of the artifacts generated on Android smartphones by Telegram Messenger, the official client for the Telegram instant messaging platform, which provides various forms of secure individual and group communication, by means of which both textual and non-textual messages can be exchanged among users, as well as voice calls. Our methodology is based on the design of a set of experiments suitable to elicit the generation of artifacts and their retention on the device storage, and on the use of virtualized smartphones to ensure the generality of the results and the full repeatability of the experiments, so that our findings can be reproduced and validated by a third-party. In this paper we show that, by using the proposed methodology, we are able (a) to identify all the artifacts generated by Telegram Messenger, (b) to decode and interpret each one of them, and (c) to correlate them in order to infer various types of information that cannot be obtained by considering each one of them in isolation. As a result, in this paper we show how to reconstruct the list of contacts, the chronology and contents of the messages that have been exchanged by users, as well as the contents of files that have been sent or received. Furthermore, we show how to determine significant properties of the various chats, groups, and channels in which the user has been involved (e.g., the identifier of the creator, the date of creation, the date of joining, etc.). Finally, we show how to reconstruct the log of the voice calls made or received by the user. Although in this paper we focus on Telegram Messenger, our methodology can be applied to the forensic analysis of any application running on the Android platform.

      PubDate: 2017-09-25T02:34:36Z
       
  • Live acquisition of main memory data from Android smartphones and
           smartwatches
    • Abstract: Publication date: Available online 19 September 2017
      Source:Digital Investigation
      Author(s): Seung Jei Yang, Jung Ho Choi, Ki Bom Kim, Rohit Bhatia, Brendan Saltaformaggio, Dongyan Xu
      Recent research in Android device forensics has largely focused on evidence recovery from NAND flash memory. However, pervasive deployment of NAND flash encryption technologies and the increase in malware infections which reside only in main memory have motivated an urgent need for the forensic study of main memory. Existing Android main memory forensics techniques are hardly being adopted in practical forensic investigations because they often require solving several usability constraints, such as requiring root privilege escalation, custom kernel replacement, or screen lock bypass. Moreover, there are still no commercially available tools for acquiring the main memory data of smart devices. To address these problems, we have developed an automated tool, called AMD, which is capable of acquiring the entire content of main memory from a range of Android smartphones and smartwatches. In developing AMD, we analyzed the firmware update protocols of these devices by reverse engineering the Android bootloader. Based on this study, we have devised a method that allows access to main memory data through the firmware update protocols. Our experimental results show that AMD overcomes the usability constraints of previous main memory acquisition approaches and that the acquired main memory data of a smartphone or smartwatch can be accurately used in forensic investigations.

      PubDate: 2017-09-25T02:34:36Z
       
  • Study on the tracking revision history of MS Word files for forensic
           investigation
    • Abstract: Publication date: Available online 8 September 2017
      Source:Digital Investigation
      Author(s): Doowon Jeong, Sangjin Lee
      Document forensics remains an important field of digital forensics. To date, previously existing methods focused on the last saved version of the document file stored on the PC; however, the drawback of this approach is that this provides no indication as to how the contents have been modified. This paper provides a novel method for document forensics based on tracking the revision history of a Microsoft Word file. The proposed method concentrates on the TMP file created when the author saves the file and the ASD file created periodically by Microsoft Word during editing. A process whereby the revision history lists are generated based on metadata of the Word, TMP, and ASD files is presented. Furthermore, we describe a technique developed to link the revision history lists based on similarity. These outcomes can provide considerable assistance to a forensic investigator trying to establish the extent to which document file contents have been changed and when the file was created, modified, deleted, and copied.
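The linking step can be pictured as follows: given text snapshots recovered from the Word, TMP, and ASD files, chain each snapshot to its most similar successor. This is our simplified reading of "link the revision history lists based on similarity", using the standard library's difflib rather than the authors' actual metric; all names are illustrative.

```python
# Greedy similarity chaining of recovered document snapshots into a plausible
# edit history. A hypothetical sketch, not the paper's implementation.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; higher means the two snapshots are more alike."""
    return SequenceMatcher(None, a, b).ratio()

def link_revisions(snapshots):
    """Order snapshots into an edit chain, greedily by similarity to the tail."""
    remaining = list(snapshots)
    chain = [remaining.pop(0)]  # assume the first known version seeds the chain
    while remaining:
        best = max(remaining, key=lambda s: similarity(chain[-1], s))
        remaining.remove(best)
        chain.append(best)
    return chain
```

A real tool would chain on metadata (timestamps, authorship) as well as content, but the ordering intuition is the same.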

      PubDate: 2017-09-12T21:13:51Z
       
  • A novel file carving algorithm for National Marine Electronics Association
           (NMEA) logs in GPS forensics
    • Abstract: Publication date: Available online 8 September 2017
      Source:Digital Investigation
      Author(s): Kai Shi, Ming Xu, Haoxia Jin, Tong Qiao, Xue Yang, Ning Zheng, Jian Xu, Kim-Kwang Raymond Choo
      Global Positioning System (GPS) devices are an increasingly important source of evidence, as more of our devices have built-in GPS capabilities. In this paper, we propose a novel framework to efficiently recover National Marine Electronics Association (NMEA) logs and reconstruct GPS trajectories. Unlike existing approaches that require file system metadata, our proposed algorithm is designed based on the file carving technique without relying on system metadata. By understanding the characteristics and intrinsic structure of trajectory data in NMEA logs, we demonstrate how to pinpoint all data blocks belonging to the NMEA logs from the acquired forensic image of a GPS device. Then, a discriminator is presented to determine whether two data blocks can be merged. Based on this discriminator, we design a reassembly algorithm to re-order and merge the obtained data blocks into new logs. In this context, deleted trajectories can be reconstructed by analyzing the recovered logs. Empirical experiments demonstrate that our proposed algorithm performs well whether or not system metadata is available, when log files are heavily fragmented, when one or more parts of the log files are overwritten, and for different file systems of variable cluster sizes.
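The block-classification idea rests on a property of the NMEA 0183 format: each sentence has the shape "$...*hh", where hh is the XOR of all characters between '$' and '*'. A block that contains checksum-valid sentences very likely belongs to a log. A hedged sketch (function names and the GP-talker restriction are our assumptions, not the authors' carver):

```python
# Decide whether a raw data block belongs to an NMEA log by finding
# checksum-valid sentences in it. Illustrative sketch, not the paper's tool.

import re

SENTENCE = re.compile(rb"\$(GP[A-Z]{3},[^*\r\n]*)\*([0-9A-Fa-f]{2})")

def nmea_checksum(payload: bytes) -> int:
    """XOR of all bytes between '$' and '*'."""
    c = 0
    for b in payload:
        c ^= b
    return c

def valid_sentences(block: bytes):
    """All checksum-valid GPS sentences found in a raw data block."""
    return [m.group(0) for m in SENTENCE.finditer(block)
            if nmea_checksum(m.group(1)) == int(m.group(2), 16)]

def looks_like_nmea(block: bytes, threshold=1) -> bool:
    return len(valid_sentences(block)) >= threshold
```

The timestamps and coordinates inside the validated sentences are then what a discriminator can compare to decide whether two candidate blocks are contiguous.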

      PubDate: 2017-09-12T21:13:51Z
       
  • Clustering image noise patterns by embedding and visualization for common
           source camera detection
    • Abstract: Publication date: Available online 5 September 2017
      Source:Digital Investigation
      Author(s): Sonja Georgievska, Rena Bakhshi, Anand Gavai, Alessio Sclocco, Ben van Werkhoven
      We consider the problem of clustering a large set of images based on similarities of their noise patterns. Such clustering is necessary in forensic cases in which detection of common source of images is required, when the cameras are not physically available. We propose a novel method for clustering combining low dimensional embedding, visualization, and classical clustering of the dataset based on the similarity scores. We evaluate our method on the Dresden images database showing that the methodology is highly effective.

      PubDate: 2017-09-06T20:59:09Z
       
  • Contents List
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement


      PubDate: 2017-08-31T19:43:34Z
       
  • Seventeenth Annual DFRWS Conference
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement


      PubDate: 2017-08-31T19:43:34Z
       
  • Digital forensic approaches for Amazon Alexa ecosystem
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Hyunji Chung, Jungheum Park, Sangjin Lee
      Internet of Things (IoT) devices such as the Amazon Echo – a smart speaker developed by Amazon – are undoubtedly great sources of potential digital evidence due to their ubiquitous use and their always-on mode of operation, constituting a black box of human life. The Amazon Echo in particular plays a central role for the cloud-based intelligent virtual assistant (IVA) Alexa developed by Amazon Lab126. The Alexa-enabled wireless smart speaker is the gateway for all voice commands submitted to Alexa. Moreover, the IVA interacts with a plethora of compatible IoT devices and third-party applications that leverage cloud resources. Understanding the complex cloud ecosystem that allows ubiquitous use of Alexa is paramount to supporting digital investigations when the need arises. This paper discusses methods for digital forensics pertaining to the IVA Alexa's ecosystem. The primary contribution of this paper consists of a new efficient approach of combining cloud-native forensics with client-side forensics (forensics for companion devices), to support practical digital investigations. Based on a deep understanding of the targeted ecosystem, we propose a proof-of-concept tool, CIFT, that supports identification, acquisition and analysis of both native artifacts from the cloud and client-centric artifacts from local devices (mobile applications and web browsers).

      PubDate: 2017-08-31T19:43:34Z
       
  • Leveraging the SRTP protocol for over-the-network memory acquisition of a
           GE Fanuc Series 90-30
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): George Denton, Filip Karpisek, Frank Breitinger, Ibrahim Baggili
      Programmable Logic Controllers (PLCs) are common components implemented across many industries such as manufacturing, water management, travel, aerospace and hospitals to name a few. Given their broad deployment in critical systems, they have been and remain a common target for cyber attacks; the most prominent one being Stuxnet. Often PLCs (especially older ones) are only protected by an outer line of defense (e.g., a firewall) but once an attacker gains access to the system or the network, there might not be any other defense layers. In this scenario, a forensic investigator should not rely on the existing software as it might have been compromised. Therefore, we reverse engineered the GE-SRTP network protocol using a GE Fanuc Series 90-30 PLC and provide two major contributions: We first describe the Service Request Transport protocol (GE-SRTP) which was invented by General Electric (GE) and is used by many of their Ethernet connected controllers. Note, to the best of our knowledge, prior to this work no publicly available documentation on the protocol existed, affording users only security by obscurity. Second, based on our understanding of the protocol, we implemented a software application that allows direct network-based communication with the PLC (no intermediate server is needed). While the tool's forensic mode is harmless and only allows for reading registers, we discovered that one can manipulate/write to the registers in its default configuration, e.g., turn off the PLC, or manipulate the items/processes it controls.

      PubDate: 2017-08-31T19:43:34Z
       
  • SCARF: A container-based approach to cloud-scale digital forensic
           processing
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Christopher Stelly, Vassil Roussev
      The rapid growth of raw data volume requiring forensic processing has become one of the top concerns of forensic analysts. At present, there are no readily available solutions that provide: a) open and flexible integration of existing forensic tools into a processing pipeline; and b) scale-out architecture that is compatible with common cloud technologies. Containers, lightweight OS-level virtualized environments, are quickly becoming the preferred architectural unit for building large-scale data processing systems. We present a container-based software framework, SCARF, which applies this approach to forensic computations. Our prototype demonstrates its practicality by providing low-cost integration of both custom code and a variety of third-party tools via simple data interfaces. The resulting system fits well with the data parallel nature of most forensic tasks, which tend to have few dependencies that limit parallel execution. Our experimental evaluation shows that for several types of processing tasks, such as hashing, indexing and bulk processing, performance scales almost linearly with the addition of hardware resources. We show that the software engineering effort to integrate new tools is quite modest, and all the critical task scheduling and resource allocation are automatically managed by the container orchestration runtime (Docker Swarm, or similar).
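SCARF itself orchestrates containers, but the data-parallel character of the workload can be illustrated in-process: hashing many evidence items has no cross-item dependencies, so throughput scales with added workers. A stand-alone sketch (ours, not SCARF code; names are illustrative):

```python
# Embarrassingly parallel evidence hashing: each item is independent, so the
# work fans out across a worker pool with no coordination between tasks.
# Illustrative sketch of the workload shape, not SCARF's container pipeline.

import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256_item(item):
    """Hash one (name, data) evidence item."""
    name, data = item
    return name, hashlib.sha256(data).hexdigest()

def hash_corpus(corpus, workers=4):
    """Hash a {name: bytes} corpus in parallel and collect the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(sha256_item, corpus.items()))
```

In SCARF the same fan-out happens across containers scheduled by the orchestrator rather than threads in one process, which is what lets it scale out across cloud nodes.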

      PubDate: 2017-08-31T19:43:34Z
       
  • Insights gained from constructing a large scale dynamic analysis platform
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Cody Miller, Dae Glendowne, Henry Cook, DeMarcus Thomas, Chris Lanclos, Patrick Pape
      As the number of malware samples found increases exponentially each year, there is a need for systems that can dynamically analyze thousands of malware samples per day. These systems should be reliable, scalable, and simple to use by other systems and malware analysts. When handling thousands of samples, reprocessing even a small percentage due to errors can be devastating; a reliable system avoids wasting resources by reducing the number of errors. In this paper, we describe our scalable dynamic analysis platform, perform experiments on the platform, and provide lessons we have learned through the process. The platform uses the Cuckoo sandbox for dynamic analysis and has been tuned to process malware as quickly as possible without losing valuable information. Experiments were performed to improve the configuration of the system's components and to improve the accuracy of the dynamic analysis. The lessons learned presented in the paper may aid others in the development of similar dynamic analysis systems.
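The reliability concern above, avoiding wholesale reprocessing by containing transient errors, can be sketched as a bounded retry wrapper around sample submission. `analyze` is a hypothetical stand-in for a Cuckoo-style submission call:

```python
# Sketch of error containment when analyzing thousands of samples: transient
# sandbox failures trigger a bounded number of retries instead of the sample
# silently joining a bulk reprocessing queue. `analyze` is a hypothetical
# callable standing in for submission to a Cuckoo-style sandbox.

def run_with_retries(analyze, sample, max_attempts=3):
    """Return (result, attempts used); re-raise after max_attempts failures."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return analyze(sample), attempt
        except RuntimeError as err:   # treat RuntimeError as a transient failure
            last_error = err
    raise last_error
```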

      PubDate: 2017-08-31T19:43:34Z
       
  • SCADA network forensics of the PCCC protocol
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Saranyan Senthivel, Irfan Ahmed, Vassil Roussev
      Most SCADA devices have few built-in self-defense mechanisms, and tend to implicitly trust communications received over the network. Therefore, monitoring and forensic analysis of network traffic is a critical prerequisite for building an effective defense around SCADA units. In this work, we provide a comprehensive forensic analysis of network traffic generated by the PCCC (Programmable Controller Communication Commands) protocol and present a prototype tool capable of extracting both updates to programmable logic and crucial configuration information. The results of our analysis show that more than 30 files, including configuration and data files, are transferred to/from the PLC when downloading/uploading a ladder logic program using RSLogix programming software. Interestingly, when RSLogix compiles a ladder-logic program, it does not create any low-level representation of a ladder-logic file. However, the low-level ladder logic is present and can be extracted from the network traffic log using our prototype tool. The tool also extracts the SMTP configuration from the network log and parses it to obtain email addresses, usernames, and passwords; the network log contains these passwords in plain text.
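Recovering files from captured traffic, as the tool above does, reduces at its core to reassembling chunked payloads carried by individual protocol messages. The record format below is a simplified placeholder, not the actual PCCC message layout:

```python
# Sketch of file reassembly from captured PLC traffic. Each record is a
# simplified (file_name, offset, payload) tuple assumed to have been parsed
# from one protocol message; the real PCCC layout is more involved.

def reassemble(records):
    """Group chunked payloads by file name and concatenate them by offset."""
    files = {}
    for name, offset, payload in records:
        files.setdefault(name, []).append((offset, payload))
    return {
        name: b"".join(payload for _, payload in sorted(chunks))
        for name, chunks in files.items()
    }
```

Sorting by offset makes the reassembly robust to messages arriving (or being logged) out of order.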

      PubDate: 2017-08-31T19:43:34Z
       
  • Linux memory forensics: Dissecting the user space process heap
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Frank Block, Andreas Dewald
      The analysis of memory during a forensic investigation is often an important step in reconstructing events. While prior work in this field has mostly concentrated on information residing in the kernel space (process lists, network connections, and so on) and in particular on the Microsoft Windows operating system, this work focuses on Linux user space processes, as they might also contain valuable information for an investigation. Because a lot of process data is located in the heap, this work concentrates on the analysis of Glibc's heap implementation and on how and where heap-related information is stored in the virtual memory of Linux processes that use this implementation. Until now, the heap has mostly been treated as one large cohesive memory region from a memory forensics perspective, making it hard, manual work to identify relevant information inside it. We introduce a Python class for the memory analysis framework Rekall that is based on our analysis results and allows access to all chunks contained in the heap and their meta information. Further, based on this class, six plugins have been developed that support an investigator in analyzing user space processes: Four of these plugins provide generic analysis capabilities such as finding information/references within chunks and dumping chunks into separate files for further investigation. These plugins have been used to reverse engineer data structures within the heap for user space processes, while illustrating how such plugins ease the whole analysis process. The remaining two plugins are a result of these user space process analyses and extract the command history for the zsh shell and password entry information for the password manager KeePassX.
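The chunk-level access described above rests on a simple property of Glibc's malloc: on 64-bit builds each chunk header carries a size field whose low three bits are flags (e.g., PREV_INUSE), so consecutive chunks can be walked by repeatedly advancing by the masked size. A minimal sketch of that traversal over a raw heap dump, without the arena bookkeeping the Rekall class handles:

```python
# Minimal sketch of walking glibc malloc chunks in a raw 64-bit heap dump.
# Each chunk header is two 8-byte words (prev_size, size); the low three
# bits of the size word are flags, so the next chunk lies size & ~0x7
# bytes further on. Real analysis must also handle arenas, bins, and
# top chunks, which this sketch omits.

import struct

def walk_chunks(heap: bytes):
    """Yield (offset, chunk_size, prev_inuse) for each chunk in the dump."""
    off = 0
    while off + 16 <= len(heap):
        _prev_size, size_field = struct.unpack_from("<QQ", heap, off)
        size = size_field & ~0x7
        if size == 0:            # corrupt header or end of usable data
            break
        yield off, size, bool(size_field & 0x1)
        off += size
```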

      PubDate: 2017-08-31T19:43:34Z
       
  • Extending The Sleuth Kit and its underlying model for pooled storage file
           system forensic analysis
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Jan-Niclas Hilgert, Martin Lambertz, Daniel Plohmann
      Carrier's book File System Forensic Analysis is one of the most comprehensive sources when it comes to the forensic analysis of file systems. Published in 2005, it provides details about the most commonly used file systems of that time as well as a process model to analyze file systems in general. The Sleuth Kit is the implementation of Carrier's model and it is still widely used during forensic analyses today—standalone or as a basis for forensic suites such as Autopsy. While The Sleuth Kit is still actively maintained, the model has not seen any updates since then. Moreover, there is no support for modern file systems implementing new paradigms such as pooled storage. In this paper, we present an update to Carrier's model which enables the analysis of pooled storage file systems. To demonstrate that our model is suitable, we implemented it for ZFS—a file system for large scale storage, cloud, and virtualization environments—and show how to perform an analysis of this file system using our model and extended toolkit.
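The structural change pooled storage forces on Carrier's model can be illustrated with a toy data structure: instead of one file system per volume, a pool aggregates several devices and hosts several datasets, so analysis needs a pool layer between the volume and file-system layers. The class below is illustrative only and does not reflect The Sleuth Kit's actual APIs:

```python
# Illustrative sketch (not a TSK API) of why pooled storage breaks the
# one-volume-one-file-system assumption: a pool spans multiple devices and
# hosts multiple datasets, so file enumeration must go through a pool layer.

class StoragePool:
    def __init__(self, devices):
        self.devices = list(devices)   # a pool aggregates several volumes
        self.datasets = {}             # several file systems share the pool

    def add_dataset(self, name, files):
        self.datasets[name] = list(files)

    def enumerate_files(self):
        """File-system-layer view across all datasets in the pool."""
        return [(ds, f) for ds, files in self.datasets.items() for f in files]
```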

      PubDate: 2017-08-31T19:43:34Z
       
  • Gaslight: A comprehensive fuzzing architecture for memory forensics
           frameworks
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Andrew Case, Arghya Kusum Das, Seung-Jong Park, J. (Ram) Ramanujam, Golden G. Richard
      Memory forensics is now a standard component of digital forensic investigations and incident response handling, since memory forensic techniques are quite effective in uncovering artifacts that might be missed by traditional storage forensics or live analysis techniques. Because of the crucial role that memory forensics plays in investigations, and because of the increasing automation of memory forensics techniques, it is imperative that these tools be resilient to memory smear and deliberate tampering. Without robust algorithms, malware may go undetected, frameworks may crash when attempting to process memory samples, and automation of memory forensics techniques is difficult. In this paper we present Gaslight, a powerful and flexible fuzz-testing architecture for stress-testing both open- and closed-source memory forensics frameworks. Gaslight automatically targets critical code paths that process memory samples and mutates samples in an efficient way to reveal implementation errors. In experiments we conducted against several popular memory forensics frameworks, Gaslight revealed a number of critical, previously undiscovered bugs.
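The mutation step described above can be sketched simply: take a valid memory sample and corrupt bytes at offsets the target framework parses, producing variants likely to exercise error-handling paths. The offsets here are arbitrary; Gaslight's actual strategy targets critical code paths identified in the framework:

```python
# Sketch of the sample-mutation step in fuzzing a memory forensics
# framework: flip bytes at chosen offsets of a valid memory sample to
# produce crash-inducing variants. Offset selection here is left to the
# caller; the real system targets offsets parsed by critical code paths.

def mutate(sample: bytes, offsets, mask: int = 0xFF):
    """Return one variant per offset with that byte XOR-ed with `mask`."""
    variants = []
    for off in offsets:
        buf = bytearray(sample)
        buf[off] ^= mask          # XOR keeps the mutation reversible
        variants.append(bytes(buf))
    return variants
```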

      PubDate: 2017-08-31T19:43:34Z
       
  • Analyzing user-event data using score-based likelihood ratios with marked
           point processes
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Christopher Galbraith, Padhraic Smyth
      In this paper we investigate the application of score-based likelihood ratio techniques to the problem of detecting whether two time-stamped event streams were generated by the same source or by two different sources. We develop score functions for event data streams by building on ideas from the statistical modeling of marked point processes, focusing in particular on the coefficient of segregation and mingling index. The methodology is applied to a data set consisting of logs of computer activity over a 7-day period from 28 different individuals. Experimental results on known same-source and known different-source data sets indicate that the proposed scores have significant discriminative power in this context. The paper concludes with a discussion of the potential benefits and challenges that may arise from the application of statistical analysis to user-event data in digital forensics.
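Of the two point-process statistics named above, the mingling index has a particularly direct reading for event streams: for each event, the fraction of its k nearest neighbours (in time) that carry the other source's mark. The sketch below illustrates that idea; it is a simplified rendering, not the authors' exact score function:

```python
# Sketch of a mingling index for time-stamped events with source marks:
# for each event, the fraction of its k nearest neighbours in time whose
# mark differs. Well-mingled streams (suggesting different sources behaving
# independently) score high; segregated streams score low. Simplified
# relative to the paper's score-based likelihood ratio construction.

def mingling_index(events, k=1):
    """events: list of (timestamp, mark). Returns the mean mingling value."""
    total = 0.0
    for i, (t, mark) in enumerate(events):
        neighbours = sorted(
            (abs(t - u), m) for j, (u, m) in enumerate(events) if j != i
        )[:k]
        total += sum(1 for _, m in neighbours if m != mark) / k
    return total / len(events)
```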

      PubDate: 2017-08-31T19:43:34Z
       
  • Time-of-recording estimation for audio recordings
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Lilei Zheng, Ying Zhang, Chien Eao Lee, Vrizlynn L.L. Thing
      This work addresses the problem of ENF pattern matching in the task of time-of-recording estimation. Inspired by the principle of visual comparison, we propose a novel similarity criterion, the bitwise similarity, for measuring the similarity between two ENF signals. A search system is then developed to find the best matches for a given test ENF signal within a large searching scope on the reference ENF data. By empirical comparison to other popular similarity criteria, we demonstrate that the proposed method is more effective and efficient than the state-of-the-art. For example, compared with the recent DMA algorithm, our method achieves a relative error rate decrease of 86.86% (from 20.32% to 2.67%) and a speedup of 45× faster search response (41.0444 s versus 0.8973 s). Last but not least, we present a strategy of uniqueness examination to help human examiners to ensure high precision decisions, which makes our method practical in potential forensic use.
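One plausible reading of bitwise ENF matching, offered here only as an illustration and not necessarily the paper's exact criterion, is to quantize each signal to a bit string via the sign of successive differences and then slide the test string over the reference, scoring bit agreement:

```python
# Hedged sketch of sign-based "bitwise" ENF matching: quantize each signal
# to bits via the sign of successive differences, then slide the test bit
# string over the reference and score agreement at each offset. This is an
# illustrative reading, not necessarily the paper's exact criterion.

def to_bits(signal):
    """1 where the signal rises, 0 otherwise."""
    return [1 if b > a else 0 for a, b in zip(signal, signal[1:])]

def best_match(reference, test):
    """Return (offset, score) of the best-matching alignment."""
    ref_bits, test_bits = to_bits(reference), to_bits(test)
    best = (0, -1.0)
    for off in range(len(ref_bits) - len(test_bits) + 1):
        agree = sum(r == t for r, t in zip(ref_bits[off:], test_bits))
        score = agree / len(test_bits)
        if score > best[1]:
            best = (off, score)
    return best
```

Because each comparison is a bit-equality test, the search is cheap per offset, which is consistent with the large speedups reported for bitwise criteria.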

      PubDate: 2017-08-31T19:43:34Z
       
  • Carving database storage to detect and trace security breaches
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): James Wagner, Alexander Rasin, Boris Glavic, Karen Heart, Jacob Furst, Lucas Bressan, Jonathan Grier
      Database Management Systems (DBMS) are routinely used to store and process sensitive enterprise data. However, it is not possible to secure data by relying on the access control and security mechanisms (e.g., audit logs) of such systems alone – users may abuse their privileges (no matter whether granted or gained illegally) or circumvent security mechanisms to maliciously alter and access data. Thus, in addition to taking preventive measures, the major goal of database security is to (1) detect breaches and (2) gather evidence about attacks for devising countermeasures. We present an approach that evaluates the integrity of a live database, identifying and reporting evidence of log tampering. Our approach is based on forensic analysis of database storage and detection of inconsistencies between database logs and physical storage state (disk and RAM). We apply our approach to multiple DBMS to demonstrate its effectiveness in discovering malicious operations and providing detailed information about the data that was illegally accessed/modified.
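The core consistency check above can be sketched as a set comparison: replay the logged operations to derive which rows *should* be live, then flag rows carved from storage as active that the log cannot explain. The record formats below are simplified placeholders for the paper's storage-carving output:

```python
# Sketch of the log-versus-storage consistency check: rows parsed as active
# from physical storage should be explainable by the audit log; an active
# row with no logged INSERT (or with a logged DELETE) suggests tampering.
# Row and log formats are simplified placeholders.

def find_inconsistencies(carved_active_rows, logged_ops):
    """Return carved-as-active rows whose presence the log does not explain."""
    alive = set()
    for op, row in logged_ops:
        if op == "INSERT":
            alive.add(row)
        elif op == "DELETE":
            alive.discard(row)
    return [row for row in carved_active_rows if row not in alive]
```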

      PubDate: 2017-08-31T19:43:34Z
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: June 2017
      Source:Digital Investigation, Volume 21


      PubDate: 2017-06-05T14:57:04Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: June 2017
      Source:Digital Investigation, Volume 21


      PubDate: 2017-06-05T14:57:04Z
       
  • The broadening horizons of digital investigation
    • Abstract: Publication date: Available online 16 May 2017
      Source:Digital Investigation
      Author(s): Eoghan Casey


      PubDate: 2017-05-21T02:44:35Z
       
  • Comments on “A method and a case study for the selection of the best
           available tool for mobile device forensics using decision analysis”
           [Digit Investig 16S, S55–S64]
    • Abstract: Publication date: Available online 11 May 2017
      Source:Digital Investigation
      Author(s): Shahzad Saleem, Oliver Popov, Ibrahim Baggili


      PubDate: 2017-05-16T02:37:28Z
       
  • A survey of current social network and online communication provision
           policies to support law enforcement identify offenders
    • Abstract: Publication date: Available online 8 May 2017
      Source:Digital Investigation
      Author(s): Graeme Horsman
      Online forms of harassment, stalking and bullying on social network and communication platforms are now arguably widespread and subject to regular media coverage. As these provisions continue to attract millions of users, generating significant volumes of traffic, regulating abuse and effectively reprimanding those who are involved in it is a difficult and sometimes impossible task. This article collates information acquired from 22 popular social network and communication platforms in order to identify current regulatory gaps. Terms of service and privacy policies are reviewed to assess existing practices of data retention and to evaluate the feasibility of law enforcement officials tracking those whose actions breach the law. For each provision, account sign-up processes are evaluated, and policies for retaining Internet Protocol logs and user account information are assessed along with the availability of account preservation orders. Finally, recommendations are offered for improving current approaches to regulating social network crime and online offender tracking.

      PubDate: 2017-05-10T20:37:39Z
       
  • Graph clustering and anomaly detection of access control log for forensic
           purposes
    • Abstract: Publication date: Available online 3 May 2017
      Source:Digital Investigation
      Author(s): Hudan Studiawan, Christian Payne, Ferdous Sohel
      Attacks on operating system access control have become a significant and increasingly common problem. This type of security threat is recorded in forensic artifacts such as authentication logs, which forensic investigators generally examine to analyze such incidents. Anomalies in these logs are highly correlated with an attacker's attempts to compromise the system. In this paper, we propose a novel method to automatically detect anomalies in the access control log of an operating system. The logs are first preprocessed and then clustered using an improved MajorClust algorithm to obtain better clusters. This technique provides parameter-free clustering, so it can automatically produce an analysis report for forensic investigators. The clustering results are then checked for anomalies based on a score that considers factors such as the number of members in a cluster, the frequency of the events in the log file, and the inter-arrival time of a specific activity. We also provide a graph-based visualization of logs to assist the investigators with easy analysis. Experimental results compiled on an open dataset of Linux authentication logs show that the proposed method achieved an accuracy of 83.14%.
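The scoring idea above combines three signals: small clusters, rare event types, and unusually short inter-arrival times all push a cluster toward "anomalous". The weights and combination rule below are illustrative, not the paper's exact formula:

```python
# Sketch of an anomaly score over clustered authentication-log events:
# lone events, rare event types, and rapid-fire activity each raise the
# score. The normalization and equal weighting are illustrative choices,
# not the paper's exact formula.

def anomaly_score(cluster_size, event_frequency, inter_arrival_s, total_events):
    """Higher score = more anomalous; each factor lies in (0, 1]."""
    size_factor = 1.0 / cluster_size                     # lone events stand out
    freq_factor = 1.0 - event_frequency / total_events   # rare event types
    burst_factor = 1.0 / (1.0 + inter_arrival_s)         # rapid-fire attempts
    return (size_factor + freq_factor + burst_factor) / 3.0
```

For example, a single rare event arriving in a burst scores near 1, while a member of a large cluster of routine, slow-paced events scores near 0.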

      PubDate: 2017-05-06T15:56:22Z
       
  • Contents List
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20


      PubDate: 2017-04-02T10:14:14Z
       
  • Contents List
    • Abstract: Publication date: March 2017
      Source:Digital Investigation, Volume 20, Supplement


      PubDate: 2017-03-25T14:30:16Z
       
  • Advances in volatile memory forensics
    • Abstract: Publication date: Available online 10 March 2017
      Source:Digital Investigation
      Author(s): Bradley Schatz, Michael Cohen


      PubDate: 2017-03-11T00:58:17Z
       
  • Corrigendum to ‘OBA2: An Onion approach to Binary code Authorship
           Attribution’ [Digit Investig 11 (2014) S94–S103]
    • Abstract: Publication date: Available online 23 February 2017
      Source:Digital Investigation
      Author(s): Saed Alrabaee, Noman Saleem, Stere Preda, Lingyu Wang, Mourad Debbabi


      PubDate: 2017-02-25T14:52:53Z
       
  • Detection of upscale-crop and splicing for digital video authentication
    • Abstract: Publication date: Available online 16 January 2017
      Source:Digital Investigation
      Author(s): Raahat Devender Singh, Naveen Aggarwal
      The widespread adoption of multimedia technology has made digital audio-visual information ubiquitous. Not long ago, this digital information (images and videos especially) enjoyed the unique status of 'definitive proof of occurrence of events'. However, given its susceptibility to malicious modification, this status is rapidly depreciating. In sensitive areas like intelligence and surveillance, reliance on manipulated visual data could be detrimental. The disparity between the ever-growing importance of digital content and suspicions regarding its vulnerability to alteration has made it necessary to determine whether or not the contents of a given digital image or video can be considered trustworthy. Digital videos are prone to several kinds of tamper attacks, but broadly these can be categorized as either inter-frame forgeries, where the arrangement of frames in a video is manipulated, or intra-frame forgeries, where the content of individual frames is manipulated. Intra-frame forgeries are simply digital image forgeries performed on the individual frames of the video. Upscale-crop and splicing are two intra-frame forgeries, both of which are performed via an image processing operation known as resampling. While the challenge of resampling detection in digital images has received much innovation over the past two decades, detection of resampling in digital videos has received little attention. To address this situation, in this paper we propose a forensic system capable of validating the authenticity of digital videos by establishing whether any of their frames, or regions of frames, have undergone post-production resampling. The system integrates the outcomes of pixel-correlation inspection and noise-inconsistency analysis; the operation of the system as a whole overcomes the limitations usually faced by these individual analyses. The proposed system has been extensively tested on a large dataset consisting of digital videos and images compressed using different codecs at different bit-rates and scaling factors, with varying noise and tampered region sizes. Empirical evidence gathered over this dataset suggests good efficacy of the system in different conditions.
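The pixel correlations that resampling leaves behind can be demonstrated in one dimension: after 2× linear upscaling, every inserted sample is the exact average of its neighbours, so the residual x[i] - (x[i-1] + x[i+1]) / 2 vanishes periodically. Real detectors estimate such periodic correlation statistically and in two dimensions; this sketch only shows the underlying effect:

```python
# One-dimensional illustration of the correlation trace left by resampling:
# after 2x linear upsampling, inserted samples equal the average of their
# neighbours, so the interpolation residual is periodically zero. Practical
# detectors estimate this periodicity statistically, in 2D, under noise.

def upsample2x(row):
    """Linearly interpolate a midpoint between each pair of samples."""
    out = []
    for a, b in zip(row, row[1:]):
        out += [a, (a + b) / 2.0]
    return out + [row[-1]]

def interpolation_residuals(row):
    """Residual of each interior sample against its neighbours' average."""
    return [row[i] - (row[i - 1] + row[i + 1]) / 2.0
            for i in range(1, len(row) - 1)]
```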

      PubDate: 2017-01-17T21:00:56Z
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2016