Digital Investigation
   [SJR: 0.674]   [H-I: 32]
   Full-text available via subscription
   ISSN (Print): 1742-2876
   Published by Elsevier
  • Alexa, did you get that? Determining the evidentiary value of data
           stored by the Amazon® Echo
    • Abstract: Publication date: Available online 1 February 2018
      Source:Digital Investigation
      Author(s): Douglas A. Orr, Laura Sanchez


      PubDate: 2018-02-04T22:41:05Z
       
  • An in-depth analysis of Android malware using hybrid techniques
    • Abstract: Publication date: Available online 31 January 2018
      Source:Digital Investigation
      Author(s): Abdullah Talha Kabakus, Ibrahim Alper Dogru
      Android malware is widespread despite Google's efforts to keep it out of the official application market, Play Store. Two techniques, namely static and dynamic analysis, are commonly used to detect malicious applications in the Android ecosystem, and each has its own advantages and disadvantages. In this paper, we propose a novel hybrid Android malware analysis approach, named mad4a, which combines the advantages of both static and dynamic analysis techniques. The aim of this study is to reveal some unknown characteristics of Android malware through the various analysis techniques used. As a result of static and dynamic analysis on widely used Android application datasets, digital investigators are informed about some underestimated characteristics of Android malware.

      PubDate: 2018-02-04T22:41:05Z
       
  • Criminal motivation on the dark web: A categorisation model for law
           enforcement
    • Abstract: Publication date: Available online 31 January 2018
      Source:Digital Investigation
      Author(s): Janis Dalins, Campbell Wilson, Mark Carman
      Research into the nature and structure of ‘Dark Webs’ such as Tor has largely focused upon manually labelling a series of crawled sites against a series of categories, sometimes using these labels as a training corpus for subsequent automated crawls. Such an approach is adequate for establishing broad taxonomies, but is of limited value for specialised tasks within the field of law enforcement. Contrastingly, existing research into illicit behaviour online has tended to focus upon particular crime types such as terrorism. A gap exists between taxonomies capable of holistic representation and those capable of detailing criminal behaviour. The absence of such a taxonomy limits interoperability between agencies, curtailing development of standardised classification tools. We introduce the Tor-use Motivation Model (TMM), a two-dimensional classification methodology specifically designed for use within a law enforcement context. The TMM achieves greater levels of granularity by explicitly distinguishing site content from motivation, providing a richer labelling schema without introducing inefficient complexity or reliance upon overly broad categories of relevance. We demonstrate this flexibility and robustness through direct examples, showing the TMM's ability to distinguish a range of unethical and illegal behaviour without bloating the model with unnecessary detail. The authors of this paper received permission from the Australian government to conduct an unrestricted crawl of Tor for research purposes, including the gathering and analysis of illegal materials such as child pornography. The crawl gathered 232,792 pages from 7651 Tor virtual domains, resulting in the collation of a wide spectrum of materials, from illicit to downright banal. Existing conceptual models and their labelling schemas were tested against a small sample of gathered data, and were observed to be either overly prescriptive or vague for law enforcement purposes - particularly when used for prioritising sites of interest for further investigation. In this paper we deploy the TMM by manually labelling a corpus of over 4000 unique Tor pages. We found a network impacted (but not dominated) by illicit commerce and money laundering, but almost completely devoid of violence and extremism. In short, criminality on this ‘dark web’ is based more upon greed and desire, rather than any particular political motivations.

      PubDate: 2018-02-04T22:41:05Z
       
  • Contents List
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22


      PubDate: 2018-02-04T22:41:05Z
       
  • The value of forensic preparedness and digital-identification expertise in
           smart society
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Eoghan Casey


      PubDate: 2018-02-04T22:41:05Z
       
  • Future challenges for smart cities: Cyber-security and digital forensics
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Zubair A. Baig, Patryk Szewczyk, Craig Valli, Priya Rabadia, Peter Hannay, Maxim Chernyshev, Mike Johnstone, Paresh Kerai, Ahmed Ibrahim, Krishnun Sansurooah, Naeem Syed, Matthew Peacock
      Smart cities comprise diverse and interconnected components that constantly exchange data and facilitate improved living for a nation's population. Our view of a typical smart city consists of four key components, namely, Smart Grids, Building Automation Systems (BAS), Unmanned Aerial Vehicles (UAVs), and Smart Vehicles, together with enabling Internet of Things (IoT) sensors and the Cloud platform. The adversarial threats and criminal misuses in a smart city are increasingly heterogeneous and significant, and provisioning resilient, end-to-end security is a daunting task. When a cyber incident involving critical components of the smart city infrastructure occurs, appropriate measures can be taken to identify and enumerate concrete evidence to facilitate the forensic investigation process. Forensic preparedness and lessons learned from past forensic analysis can help protect the smart city against future incidents. This paper presents a holistic view of the security landscape of a smart city, identifying security threats and providing deep insight into digital investigation in the context of the smart city.

      PubDate: 2018-02-04T22:41:05Z
       
  • Advancing coordinated cyber-investigations and tool interoperability using
           a community developed specification language
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Eoghan Casey, Sean Barnum, Ryan Griffith, Jonathan Snyder, Harm van Beek, Alex Nelson
      Any investigation can have a digital dimension, often involving information from multiple data sources, organizations and jurisdictions. Existing approaches to representing and exchanging cyber-investigation information are inadequate, particularly when combining data sources from numerous organizations or dealing with large amounts of data from various tools. To conduct investigations effectively, there is a pressing need to harmonize how this information is represented and exchanged. This paper addresses this need for information exchange and tool interoperability with an open community-developed specification language called Cyber-investigation Analysis Standard Expression (CASE). To further promote a common structure, CASE aligns with and extends the Unified Cyber Ontology (UCO) construct, which provides a format for representing information in all cyber domains. This ontology abstracts objects and concepts that are not CASE-specific, so that they can be used across other cyber disciplines that may extend UCO. This work is a rational evolution of the Digital Forensic Analysis eXpression (DFAX) for representing digital forensic information and provenance. CASE is more flexible than DFAX and can be utilized in any context, including criminal, corporate and intelligence. CASE also builds on the Hansken data model developed and implemented by the Netherlands Forensic Institute (NFI). CASE enables the fusion of information from different organizations, data sources, and forensic tools to foster more comprehensive and cohesive analysis. This paper includes illustrative examples of how CASE can be implemented and used to capture information in a structured form to advance sharing, interoperability and analysis in cyber-investigations. In addition to capturing technical details and relationships between objects, CASE provides structure for representing and sharing details about how cyber-information was handled, transferred, processed, analyzed, and interpreted. CASE also supports data marking for sharing information at different levels of trust and classification, and for protecting sensitive and private information. Furthermore, CASE supports the sharing of knowledge related to cyber-investigations, including distinctive patterns of activity/behavior that are common across cases. This paper features a proof-of-concept Application Program Interface (API) to facilitate implementation of CASE in tools. Community members are encouraged to participate in the development and implementation of CASE and UCO.

      PubDate: 2018-02-04T22:41:05Z
       
  • Decision-theoretic file carving
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Pavel Gladyshev, Joshua I. James
      This article explores a novel approach to file carving by viewing it as a decision problem. This allows us to design algorithms that produce best-effort results under given resource constraints. Resource-constrained carving is important for digital forensic triage, as well as for e-discovery, where a reduction in carving time may be preferred to completeness. In this work we give a formal definition of decision-theoretic file carving. As an illustration, we developed a JPEG file carving tool using the described decision-theoretic algorithm. We then examine the results of decision-theoretic file carving compared with linear carving methods to demonstrate when decision-theoretic carving is most useful.

      PubDate: 2018-02-04T22:41:05Z
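
The abstract above frames carving as a resource-constrained decision problem. A minimal sketch of that idea follows; the fixed cluster size, the read budget, and the crude "still looks like JPEG entropy-coded data" test based on byte diversity are all illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch of decision-theoretic carving: keep reading clusters for a
# JPEG candidate only while continuing still looks worthwhile, under a budget.
CLUSTER = 4096
SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(image_bytes, read_budget):
    """Carve JPEG candidates until the cluster-read budget is exhausted."""
    results, reads, pos = [], 0, 0
    while reads < read_budget:
        start = image_bytes.find(SOI, pos)
        if start == -1:
            break
        frag, off = bytearray(), start
        while reads < read_budget:
            cluster = image_bytes[off:off + CLUSTER]
            reads += 1
            frag += cluster
            end = frag.find(EOI)
            if end != -1:                        # complete file recovered
                results.append(bytes(frag[:end + 2]))
                break
            # decision step: abandon the candidate if the cluster no longer
            # resembles high-entropy JPEG data (assumed byte-diversity test)
            if len(set(cluster)) < 64:
                break
            off += CLUSTER
        pos = start + 3
    return results
```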
       
  • Enhancing security incident response follow-up efforts with lightweight
           agile retrospectives
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): George Grispos, William Bradley Glisson, Tim Storer
      Security incidents detected by organizations are escalating in both scale and complexity. As a result, security incident response has become a critical mechanism for minimizing the damage from security incidents. The final phase within many security incident response approaches is the feedback/follow-up phase. It is within this phase that an organization is expected to use information collected during an investigation to learn from an incident, improve its security incident response process, and positively impact the wider security environment. However, recent research and security incident reports argue that organizations find it difficult to learn from incidents. A contributing factor to this learning deficiency is that industry-focused security incident response approaches typically provide very little practical information about tools or techniques that can be used to extract lessons learned from an investigation. As a result, organizations focus on improving technical security controls rather than examining or reassessing the effectiveness or efficiency of internal policies and procedures. An additional hindrance to encouraging improvement assessments is the absence of tools and/or techniques that organizations can implement to evaluate the impact of implemented enhancements on the wider organization. Hence, this research investigates the integration of lightweight agile retrospectives and meta-retrospectives into a security incident response process to enhance feedback and/or follow-up efforts. The research contribution of this paper is twofold. First, it presents an approach based on lightweight retrospectives as a means of enhancing security incident response follow-up efforts. Second, it presents an empirical evaluation of this lightweight approach in a Fortune 500 financial organization's security incident response team.

      PubDate: 2018-02-04T22:41:05Z
       
  • A local variance based approach to alleviate the scene content
           interference for source camera identification
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Chao Shi, Ngai-Fong Law, Frank H.F. Leung, Wan-Chi Siu
      Identifying the source camera of images is becoming increasingly important nowadays. A popular approach is to use a type of pattern noise called photo-response non-uniformity (PRNU): the noise of an image contains patterns that can be used as a fingerprint. However, the PRNU-based approach is sensitive to scene content and image intensity. Identification is poor in areas having low or saturated intensity, or in areas with complicated texture. The reliability of different regions is difficult to model in that it depends on the interaction of scene content and the characteristics of the denoising filter used to extract the noise. In this paper, we show that the local variance of the noise residual can measure the reliability of a pixel for PRNU-based source camera identification. Hence, we propose to use local variance to characterize the severity of scene content artifacts. The local variance is then incorporated into the general matched filter and peak-to-correlation energy (PCE) detector to provide an optimal framework for signal detection. The proposed method is tested against several state-of-the-art methods. The experimental results show that the local variance based approach outperforms the other methods in terms of identification accuracy.

      PubDate: 2018-02-04T22:41:05Z
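
A hedged illustration of the core idea, local variance of the noise residual as a per-pixel reliability measure, is sketched below with NumPy/SciPy; the paper's exact weighting and PCE formulation may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(residual, size=5):
    """Local variance via E[x^2] - E[x]^2 over a size x size window."""
    x = np.asarray(residual, dtype=np.float64)
    mean = uniform_filter(x, size)
    mean_sq = uniform_filter(x * x, size)
    return np.maximum(mean_sq - mean * mean, 1e-12)

def weighted_correlation(residual, fingerprint, size=5):
    """Correlate a noise residual with a camera fingerprint, giving less
    weight to pixels whose neighbourhood is dominated by scene content."""
    r = np.asarray(residual, dtype=np.float64)
    f = np.asarray(fingerprint, dtype=np.float64)
    w = 1.0 / local_variance(r, size)            # reliability weights
    rw = (r - r.mean()) * w
    fc = f - f.mean()
    return float((rw * fc).sum() / np.sqrt((rw * rw).sum() * (fc * fc).sum()))
```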
       
  • Forensic analysis of Telegram Messenger for Windows Phone
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): J. Gregorio, A. Gardel, B. Alarcos
      This article presents a forensic analysis methodology for obtaining the digital evidence generated by one of today's many instant messaging applications, namely “Telegram Messenger” for “Windows Phone”, paying particular attention to the digital forensic artifacts produced. The paper provides an overview of this forensic analysis, focusing particularly on how the information is structured and how the user, chat and conversation data generated by the application are organised, with the goal of extracting related data from that information. The application has several other features (e.g. games, bots, stickers) besides those of an instant messaging application (e.g. messages, images, videos, files). It is therefore necessary to decode and interpret the information, which may relate to criminal offences, and to establish the relations between the different types of user, chat and conversation data.

      PubDate: 2018-02-04T22:41:05Z
       
  • Decoding the APFS file system
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Kurt H. Hansen, Fergus Toolan
      File systems have always played a vital role in digital forensics, and during the past 30–40 years many of them have been developed to suit different needs. Some file systems are tightly connected to a specific Operating System (OS); for instance, HFS and HFS+ have been the file systems of choice in Apple devices for over 30 years. Much has happened in the evolution of storage technologies: the capacity and speed of devices have increased, and Solid State Drives (SSD) are replacing traditional drives. All of these present challenges for file systems. APFS is a file system developed from first principles and will, in 2017, become the new file system for Apple devices. To date there is no available technical information about APFS, and this is the motivation for this article.

      PubDate: 2018-02-04T22:41:05Z
       
  • Registration Data Access Protocol (RDAP) for digital forensic
           investigators
    • Abstract: Publication date: September 2017
      Source:Digital Investigation, Volume 22
      Author(s): Bruce Nikkel
      This paper describes the Registration Data Access Protocol (RDAP) with a focus on relevance to digital forensic investigators. RDAP was developed as the successor to the aging WHOIS system and is intended to eventually replace WHOIS as the authoritative source for registration information on IP addresses, Domain Names, Autonomous Systems, and more. RDAP uses a RESTful interface over HTTP and introduces a number of new features related to security, internationalization, and standardized query/response definitions. It is important for digital forensic investigators to become familiar with RDAP as it will play an increasingly important role in Internet investigations requiring the search and collection of registration data as evidence.

      PubDate: 2018-02-04T22:41:05Z
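
Since RDAP is a RESTful JSON protocol over HTTP, a lookup can be illustrated in a few lines. The sketch below assumes network access and uses the public rdap.org bootstrap redirector; production tooling would also need to handle redirects, rate limiting, and evidence preservation.

```python
import json
import urllib.request

def rdap_lookup(ip):
    """Fetch the RDAP record for an IP address via the rdap.org redirector."""
    req = urllib.request.Request(
        f"https://rdap.org/ip/{ip}",
        headers={"Accept": "application/rdap+json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

record = rdap_lookup("8.8.8.8")
# handle, name and startAddress are standard RDAP IP-network members
print(record.get("handle"), record.get("name"), record.get("startAddress"))
```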
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23


      PubDate: 2017-12-26T18:26:02Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23


      PubDate: 2017-12-26T18:26:02Z
       
  • Editorial - A smörgåsbord of digital evidence
    • Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23
      Author(s): Eoghan Casey


      PubDate: 2017-12-26T18:26:02Z
       
  • Investigation of Indecent Images of Children cases: Challenges and
           suggestions collected from the trenches
    • Abstract: Publication date: Available online 2 December 2017
      Source:Digital Investigation
      Author(s): Virginia N.L. Franqueira, Joanne Bryce, Noora Al Mutawa, Andrew Marrington
      Previous studies examining the investigative challenges and needs of Digital Forensic (DF) practitioners have typically taken a sector-wide focus. This paper presents the results of a survey which collected text-rich comments about the challenges experienced and related suggestions for improvement in the investigation of Indecent Images of Children (IIOC) cases. The comments were provided by 153 international DF practitioners (28.1% survey response rate) and were processed using Thematic Analysis. This resulted in the identification of 4 IIOC-specific challenge themes, and 6 DF-generic challenges which directly affect IIOC. The paper discusses these identified challenges from a practitioner perspective, and outlines their suggestions for addressing them.

      PubDate: 2017-12-12T22:56:33Z
       
  • Forensic limbo: Towards subverting hard disk firmware bootkits
    • Abstract: Publication date: Available online 1 December 2017
      Source:Digital Investigation
      Author(s): Michael Gruhn
      We discuss the problem posed by malicious hard disk firmware towards forensic data acquisition. To this end, we analyzed the Western Digital WD3200AAKX model series (16 different drives) in depth and outline methods for detection and subversion of current state of the art bootkits possibly located in these particular hard disks' EEPROMs. We further extend our analysis to a total of 23 different hard drive models (16 HDDs and 7 SSDs) from 10 different vendors and provide a theoretical discussion on how hard disk rootkits residing in the firmware overlays and/or modules stored in the special storage area on a HDD called the Service Area could be detected. To this end, we outline the various debug interfacing possibilities of the various hard disk drives and how they can be used to perform a live analysis of the hard disk controller, such as dumping its memory over JTAG or UART, or how to access the Service Area via vendor specific commands over SATA.

      PubDate: 2017-12-12T22:56:33Z
       
  • A method and tool to recover data deleted from a MongoDB
    • Abstract: Publication date: Available online 21 November 2017
      Source:Digital Investigation
      Author(s): Jongseong Yoon, Sangjin Lee
      A DBMS stores important data and is one of the key subjects of analysis in digital forensics. Techniques for recovering deleted data from a DBMS play an important role in finding evidence in forensic investigation cases. Although relational DBMSs have served as the primary data storage to date, NoSQL DBMSs are used increasingly often in the pursuit of Big Data, which raises the potential need to analyze a NoSQL DBMS in forensic cases. Indeed, data from approximately 26,000 servers has been deleted by massive ransom attacks on vulnerable MongoDB servers. Therefore, investigation of the internal structure and of deleted-data recovery techniques for NoSQL DBMSs is essential. In this paper, we research methods for recovering deleted data in the widely used MongoDB. We have analyzed the internal structures of the WiredTiger and MMAPv1 storage engines, which are MongoDB's disk-based storage engines. Moreover, we have implemented the recovery algorithm as a tool and evaluated its performance on real and self-generated experimental data.

      PubDate: 2017-12-12T22:56:33Z
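
The paper's recovery method is grounded in the internal structures of WiredTiger and MMAPv1. As a much cruder illustration of record-level scanning, the sketch below walks raw storage bytes looking for plausible BSON documents (a BSON document begins with its total length as a little-endian int32 and ends with a 0x00 byte); it will produce many false positives and is not the authors' algorithm.

```python
import struct

def scan_bson(buf, min_len=5, max_len=16 * 1024 * 1024):
    """Yield (offset, length) of byte ranges that plausibly hold a BSON doc."""
    for off in range(len(buf) - 3):
        (length,) = struct.unpack_from("<i", buf, off)
        if min_len <= length <= max_len and off + length <= len(buf):
            if buf[off + length - 1] == 0:       # BSON document terminator
                yield off, length

# Usage: candidates = list(scan_bson(open("collection.wt", "rb").read()))
```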
       
  • Hybrid approaches to digital forensic investigations: A comparative
           analysis in an institutional context
    • Abstract: Publication date: Available online 16 October 2017
      Source:Digital Investigation
      Author(s): Diana S. Dolliver, Carson Collins, Beau Sams
      Law enforcement agencies across the country are struggling to keep pace with processing and analyzing digital evidence seized in active criminal cases. One unique response to these challenges is the formation of a hybrid digital forensic task force: a formal partnership between higher educational institutions and municipal and/or state law enforcement agencies to further education, research, and investigatory capacity in the digital forensic field. To better understand this organizational model, this study conducted a comparative analysis between eight such task forces in the United States using the theoretical guidance of neo-institutional theory, the first such national assessment. The findings indicated that there was not one “common” model between the task forces – the number of examiners ranged from 1 to 7 or more, the average number of years the task forces were operational was 7, and the academic components varied. Despite model variation, task forces reported that the benefits related to the integration of academia with law enforcement agencies enhanced their capabilities to better serve their communities and provide a greater benefit to society.

      PubDate: 2017-10-17T15:09:56Z
       
  • Leveraging virtual machine introspection with memory forensics to detect
           and characterize unknown malware using machine learning techniques at
           hypervisor
    • Abstract: Publication date: Available online 16 October 2017
      Source:Digital Investigation
      Author(s): M.A. Ajay Kumara, C.D. Jaidhar
      Virtual Machine Introspection (VMI) has emerged as a fine-grained, out-of-VM security solution that detects malware by introspecting and reconstructing the volatile memory state of the live guest Operating System (OS). Specifically, it functions at the Virtual Machine Monitor (VMM), or hypervisor, level. The reconstructed semantic details obtained by VMI are available at the hypervisor as a combination of benign and malicious states. In order to distinguish between these two states, existing out-of-VM security solutions require extensive manual analysis. In this paper, we propose an advanced VMM-based, guest-assisted Automated Internal-and-External (A-IntExt) introspection system that leverages VMI, Memory Forensics Analysis (MFA), and machine learning techniques at the hypervisor. Further, we use the VMI-based technique to introspect digital artifacts of the live guest OS to obtain a semantic view of the process details. We implemented an Intelligent Cross View Analyzer (ICVA) and implanted it into our proposed A-IntExt system, which examines the data supplied by the VMI to detect hidden, dead, and dubious processes, while also predicting early symptoms of malware execution on the introspected guest OS in a timely manner. Machine learning techniques are used to analyze the executables that are mined and extracted using MFA-based techniques and to ascertain which executables are malicious. The practicality of the A-IntExt system was evaluated by executing a large set of real-world malware and benign executables on live guest OSs. The evaluation achieved 99.55% accuracy and a 0.004 False Positive Rate (FPR) under 10-fold cross-validation for detecting unknown malware on the generated dataset. Additionally, the proposed system was validated against other benchmark malware datasets, on which A-IntExt outperforms existing real-world malware detection at the VMM by more than 6.3%.

      PubDate: 2017-10-17T15:09:56Z
       
  • A methodology for the security evaluation within third-party Android
           Marketplaces
    • Abstract: Publication date: Available online 13 October 2017
      Source:Digital Investigation
      Author(s): William J. Buchanan, Simone Chiale, Richard Macfarlane
      This paper aims to evaluate possible threats within unofficial Android marketplaces, and to geo-localize malware distribution over three main regions: China, Europe, and Russia. It provides a comprehensive review of the existing academic literature on Android security, focusing especially on malware detection systems and existing malware databases. Through the implementation of a methodology for the identification of malicious applications, data was collected revealing 5% of the analyzed applications as malicious overall. Furthermore, the analysis showed that Russia and Europe have a preponderance of generic detections and adware, while China is found to be targeted mainly by riskware and malware.

      PubDate: 2017-10-17T15:09:56Z
       
  • A simple and effective image-statistics-based approach to detecting
           recaptured images from LCD screens
    • Abstract: Publication date: Available online 6 October 2017
      Source:Digital Investigation
      Author(s): Kai Wang
      It is now extremely easy to recapture high-resolution and high-quality images from LCD (Liquid Crystal Display) screens. Recaptured image detection is an important digital forensic problem, as image recapture is often involved in the creation of a fake image in an attempt to increase its visual plausibility. State-of-the-art image recapture forensic methods make use of strong prior knowledge about the recapturing process and are based on either the combination of a group of ad-hoc features or a specific and somehow complicated dictionary learning procedure. By contrast, we propose a conceptually simple yet effective method for recaptured image detection which is built upon simple image statistics and a very loose assumption about the recapturing process. The adopted features are pixel-wise correlation coefficients in image differential domains. Experimental results on two large databases of high-resolution, high-quality recaptured images and comparisons with existing methods demonstrate the forensic accuracy and the computational efficiency of the proposed method.

      PubDate: 2017-10-10T14:37:23Z
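
The abstract names the features as pixel-wise correlation coefficients in image differential domains. The NumPy sketch below is one plausible rendering of such features; the paper's exact feature set may differ.

```python
import numpy as np

def differential_correlations(img):
    """img: 2-D grayscale array; returns a small correlation feature vector."""
    x = np.asarray(img, dtype=np.float64)
    dh = np.diff(x, axis=1)                      # horizontal difference image
    dv = np.diff(x, axis=0)                      # vertical difference image
    feats = []
    for d in (dh, dv):
        # correlation between horizontally adjacent differential pixels
        feats.append(np.corrcoef(d[:, :-1].ravel(), d[:, 1:].ravel())[0, 1])
        # correlation between vertically adjacent differential pixels
        feats.append(np.corrcoef(d[:-1, :].ravel(), d[1:, :].ravel())[0, 1])
    return np.array(feats)                       # input to a simple classifier
```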
       
  • Identifying offenders on Twitter: A law enforcement practitioner guide
    • Abstract: Publication date: Available online 23 September 2017
      Source:Digital Investigation
      Author(s): Graeme Horsman, Kevin Ginty, Paul Cranner
      Twitter remains one of the most popular social media network sites in use today and continues to attract criticism over the volume of unsavoury and illegal content circulated by its users. When breaches of legislation occur, appropriate officials are left with the task of identifying and apprehending the physical user of an offending account, which is not always a simple task. This article provides a law enforcement practitioner focused analysis of the Twitter platform and associated services for the purposes of offender identification. Using our bespoke message harvesting tool ‘Twitterstream’, an analysis of the data available via Twitter's Streaming and REST APIs is presented, along with the message metadata which can be gleaned. The process of identifying those behind offending Twitter accounts is discussed in line with available API content and current Twitter data retention policies in order to support law enforcement investigations surrounding this social media platform.

      PubDate: 2017-09-25T02:34:36Z
       
  • Forensic analysis of Telegram Messenger on Android smartphones
    • Abstract: Publication date: Available online 20 September 2017
      Source:Digital Investigation
      Author(s): Cosimo Anglano, Massimo Canonico, Marco Guazzone
      In this paper we present a methodology for the forensic analysis of the artifacts generated on Android smartphones by Telegram Messenger, the official client for the Telegram instant messaging platform, which provides various forms of secure individual and group communication, by means of which both textual and non-textual messages can be exchanged among users, as well as voice calls. Our methodology is based on the design of a set of experiments suitable to elicit the generation of artifacts and their retention on the device storage, and on the use of virtualized smartphones to ensure the generality of the results and the full repeatability of the experiments, so that our findings can be reproduced and validated by a third-party. In this paper we show that, by using the proposed methodology, we are able (a) to identify all the artifacts generated by Telegram Messenger, (b) to decode and interpret each one of them, and (c) to correlate them in order to infer various types of information that cannot be obtained by considering each one of them in isolation. As a result, in this paper we show how to reconstruct the list of contacts, the chronology and contents of the messages that have been exchanged by users, as well as the contents of files that have been sent or received. Furthermore, we show how to determine significant properties of the various chats, groups, and channels in which the user has been involved (e.g., the identifier of the creator, the date of creation, the date of joining, etc.). Finally, we show how to reconstruct the log of the voice calls made or received by the user. Although in this paper we focus on Telegram Messenger, our methodology can be applied to the forensic analysis of any application running on the Android platform.

      PubDate: 2017-09-25T02:34:36Z
       
  • Live acquisition of main memory data from Android smartphones and
           smartwatches
    • Abstract: Publication date: Available online 19 September 2017
      Source:Digital Investigation
      Author(s): Seung Jei Yang, Jung Ho Choi, Ki Bom Kim, Rohit Bhatia, Brendan Saltaformaggio, Dongyan Xu
      Recent research in Android device forensics has largely focused on evidence recovery from NAND flash memory. However, pervasive deployment of NAND flash encryption technologies and the increase in malware infections which reside only in main memory have motivated an urgent need for the forensic study of main memory. Existing Android main memory forensics techniques are hardly being adopted in practical forensic investigations because they often require solving several usability constraints, such as requiring root privilege escalation, custom kernel replacement, or screen lock bypass. Moreover, there are still no commercially available tools for acquiring the main memory data of smart devices. To address these problems, we have developed an automated tool, called AMD, which is capable of acquiring the entire content of main memory from a range of Android smartphones and smartwatches. In developing AMD, we analyzed the firmware update protocols of these devices by reverse engineering the Android bootloader. Based on this study, we have devised a method that allows access to main memory data through the firmware update protocols. Our experimental results show that AMD overcomes the usability constraints of previous main memory acquisition approaches and that the acquired main memory data of a smartphone or smartwatch can be accurately used in forensic investigations.

      PubDate: 2017-09-25T02:34:36Z
       
  • Study on the tracking revision history of MS Word files for forensic
           investigation
    • Abstract: Publication date: Available online 8 September 2017
      Source:Digital Investigation
      Author(s): Doowon Jeong, Sangjin Lee
      Document forensics remains an important field of digital forensics. To date, previously existing methods focused on the last saved version of the document file stored on the PC; however, the drawback of this approach is that this provides no indication as to how the contents have been modified. This paper provides a novel method for document forensics based on tracking the revision history of a Microsoft Word file. The proposed method concentrates on the TMP file created when the author saves the file and the ASD file created periodically by Microsoft Word during editing. A process whereby the revision history lists are generated based on metadata of the Word, TMP, and ASD files is presented. Furthermore, we describe a technique developed to link the revision history lists based on similarity. These outcomes can provide considerable assistance to a forensic investigator trying to establish the extent to which document file contents have been changed and when the file was created, modified, deleted, and copied.

      PubDate: 2017-09-12T21:13:51Z
       
  • A novel file carving algorithm for National Marine Electronics Association
           (NMEA) logs in GPS forensics
    • Abstract: Publication date: Available online 8 September 2017
      Source:Digital Investigation
      Author(s): Kai Shi, Ming Xu, Haoxia Jin, Tong Qiao, Xue Yang, Ning Zheng, Jian Xu, Kim-Kwang Raymond Choo
      Global positioning system (GPS) devices are a source of evidence of increasing importance, as more of our devices have built-in GPS capabilities. In this paper, we propose a novel framework to efficiently recover National Marine Electronics Association (NMEA) logs and reconstruct GPS trajectories. Unlike existing approaches that require file system metadata, our proposed algorithm is based on the file carving technique and does not rely on system metadata. By understanding the characteristics and intrinsic structure of trajectory data in NMEA logs, we demonstrate how to pinpoint all data blocks belonging to NMEA logs within the acquired forensic image of a GPS device. A discriminator is then presented to determine whether two data blocks can be merged, and based on this discriminator we design a reassembly algorithm to re-order and merge the obtained data blocks into new logs. In this context, deleted trajectories can be reconstructed by analyzing the recovered logs. Empirical experiments demonstrate that our proposed algorithm performs well whether the system metadata is available or not, when log files are heavily fragmented, when one or more parts of the log files are overwritten, and for different file systems with variable cluster sizes.

      PubDate: 2017-09-12T21:13:51Z
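
One verifiable building block of NMEA carving is the sentence checksum from the public NMEA 0183 format: the XOR of all bytes between '$' and '*' must equal the two hex digits that follow. The toy block discriminator below builds on that fact; the paper's actual discriminator and reassembly algorithm are more involved, and the GP talker prefix is assumed for brevity.

```python
import re

# $GPxxx,...*hh sentences; real logs may also use GN, GL and other talkers
NMEA_RE = re.compile(rb"\$(GP[A-Z]{3},[^*\r\n]*)\*([0-9A-Fa-f]{2})")

def nmea_checksum_ok(body, claimed):
    """XOR of every byte between '$' and '*' must match the hex suffix."""
    cs = 0
    for byte in body:
        cs ^= byte
    return cs == int(claimed, 16)

def looks_like_nmea(block, threshold=0.5):
    """Discriminator sketch: is a raw data block dominated by valid NMEA?"""
    hits = [nmea_checksum_ok(m.group(1), m.group(2))
            for m in NMEA_RE.finditer(block)]
    return bool(hits) and sum(hits) / len(hits) >= threshold
```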
       
  • Clustering image noise patterns by embedding and visualization for common
           source camera detection
    • Abstract: Publication date: Available online 5 September 2017
      Source:Digital Investigation
      Author(s): Sonja Georgievska, Rena Bakhshi, Anand Gavai, Alessio Sclocco, Ben van Werkhoven
      We consider the problem of clustering a large set of images based on similarities of their noise patterns. Such clustering is necessary in forensic cases in which detection of common source of images is required, when the cameras are not physically available. We propose a novel method for clustering combining low dimensional embedding, visualization, and classical clustering of the dataset based on the similarity scores. We evaluate our method on the Dresden images database showing that the methodology is highly effective.

      PubDate: 2017-09-06T20:59:09Z
       
  • Contents List
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement


      PubDate: 2017-08-31T19:43:34Z
       
  • Seventeenth Annual DFRWS Conference
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement


      PubDate: 2017-08-31T19:43:34Z
       
  • Digital forensic approaches for Amazon Alexa ecosystem
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Hyunji Chung, Jungheum Park, Sangjin Lee
      Internet of Things (IoT) devices such as the Amazon Echo, a smart speaker developed by Amazon, are undoubtedly great sources of potential digital evidence due to their ubiquitous use and their always-on mode of operation, constituting a black box of human life. The Amazon Echo in particular plays a central role for the cloud-based intelligent virtual assistant (IVA) Alexa, developed by Amazon Lab126. The Alexa-enabled wireless smart speaker is the gateway for all voice commands submitted to Alexa. Moreover, the IVA interacts with a plethora of compatible IoT devices and third-party applications that leverage cloud resources. Understanding the complex cloud ecosystem that allows ubiquitous use of Alexa is paramount to supporting digital investigations when the need arises. This paper discusses methods for digital forensics pertaining to the IVA Alexa's ecosystem. The primary contribution of this paper is a new, efficient approach that combines cloud-native forensics with client-side forensics (forensics for companion devices) to support practical digital investigations. Based on a deep understanding of the targeted ecosystem, we propose a proof-of-concept tool, CIFT, that supports identification, acquisition and analysis of both native artifacts from the cloud and client-centric artifacts from local devices (mobile applications and web browsers).

      PubDate: 2017-08-31T19:43:34Z
       
  • Leveraging the SRTP protocol for over-the-network memory acquisition of a
           GE Fanuc Series 90-30
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): George Denton, Filip Karpisek, Frank Breitinger, Ibrahim Baggili
      Programmable Logic Controllers (PLCs) are common components implemented across many industries such as manufacturing, water management, travel, aerospace and hospitals, to name a few. Given their broad deployment in critical systems, they became and still are a common target for cyber attacks, the most prominent one being Stuxnet. Often PLCs (especially older ones) are only protected by an outer line of defense (e.g., a firewall), but once an attacker gains access to the system or the network, there might not be any other defense layers. In this scenario, a forensic investigator should not rely on the existing software, as it might have been compromised. Therefore, we reverse engineered the GE-SRTP network protocol using a GE Fanuc Series 90-30 PLC and provide two major contributions. First, we describe the Service Request Transport protocol (GE-SRTP), which was invented by General Electric (GE) and is used by many of their Ethernet-connected controllers; note that, to the best of our knowledge, no publicly available documentation on the protocol existed prior to this work, affording users security by obscurity. Second, based on our understanding of the protocol, we implemented a software application that allows direct network-based communication with the PLC (no intermediate server is needed). While the tool's forensic mode is harmless and only allows reading registers, we discovered that one can manipulate/write to the registers in its default configuration, e.g., turn off the PLC, or manipulate the items/processes it controls.

      PubDate: 2017-08-31T19:43:34Z
       
  • SCARF: A container-based approach to cloud-scale digital forensic
           processing
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Christopher Stelly, Vassil Roussev
      The rapid growth of raw data volume requiring forensic processing has become one of the top concerns of forensic analysts. At present, there are no readily available solutions that provide: a) open and flexible integration of existing forensic tools into a processing pipeline; and b) scale-out architecture that is compatible with common cloud technologies. Containers, lightweight OS-level virtualized environments, are quickly becoming the preferred architectural unit for building large-scale data processing systems. We present a container-based software framework, SCARF, which applies this approach to forensic computations. Our prototype demonstrates its practicality by providing low-cost integration of both custom code and a variety of third-party tools via simple data interfaces. The resulting system fits well with the data parallel nature of most forensic tasks, which tend to have few dependencies that limit parallel execution. Our experimental evaluation shows that for several types of processing tasks–such as hashing, indexing and bulk processing–performance scales almost linearly with the addition of hardware resources. We show that the software engineering effort to integrate new tools is quite modest, and all the critical task scheduling and resource allocation are automatically managed by the container orchestration runtime–Docker Swarm, or similar.

      PubDate: 2017-08-31T19:43:34Z
       
  • Insights gained from constructing a large scale dynamic analysis platform
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Cody Miller, Dae Glendowne, Henry Cook, DeMarcus Thomas, Chris Lanclos, Patrick Pape
      As the number of malware samples found increases exponentially each year, there is a need for systems that can dynamically analyze thousands of malware samples per day. These systems should be reliable, scalable, and simple to use by other systems and malware analysts. When handling thousands of malware, reprocessing a small percentage of the malware due to errors can be devastating; a reliable system avoids wasting resources by reducing the number of errors. In this paper, we describe our scalable dynamic analysis platform, perform experiments on the platform, and provide lessons we have learned through the process. The platform uses Cuckoo sandbox for dynamic analysis and is improved to process malware as quickly as possible without losing valuable information. Experiments were performed to improve the configuration of the system's components and help improve the accuracy of the dynamic analysis. Lessons learned presented in the paper may aid others in the development of similar dynamic analysis systems.

      PubDate: 2017-08-31T19:43:34Z
       
  • SCADA network forensics of the PCCC protocol
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Saranyan Senthivel, Irfan Ahmed, Vassil Roussev
      Most SCADA devices have few built-in self-defence mechanisms, and tend to implicitly trust communications received over the network. Therefore, monitoring and forensic analysis of network traffic is a critical prerequisite for building an effective defense around SCADA units. In this work, we provide a comprehensive forensic analysis of network traffic generated by the PCCC (Programmable Controller Communication Commands) protocol and present a prototype tool capable of extracting both updates to programmable logic and crucial configuration information. The results of our analysis show that more than 30 files, including configuration and data files, are transferred to/from the PLC when downloading/uploading a ladder logic program using RSLogix programming software. Interestingly, when RSLogix compiles a ladder-logic program, it does not create any low-level representation of a ladder-logic file; however, the low-level ladder logic is present and can be extracted from the network traffic log using our prototype tool. The tool also extracts the SMTP configuration from the network log and parses it to obtain the email addresses, username and password; the network log contains the password in plain text.

      PubDate: 2017-08-31T19:43:34Z
       
  • Linux memory forensics: Dissecting the user space process heap
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Frank Block, Andreas Dewald
      The analysis of memory during a forensic investigation is often an important step to reconstruct events. While prior work in this field has mostly concentrated on information residing in the kernel space (process lists, network connections, and so on) and in particular on the Microsoft Windows operating system, this work focuses on Linux user space processes as they might also contain valuable information for an investigation. Because a lot of process data is located in the heap, this work in the first place concentrates on the analysis of Glibc's heap implementation and on how and where heap related information is stored in the virtual memory of Linux processes that use this implementation. Up to now, the heap was mostly considered a large cohesive memory region from a memory forensics perspective, making it rather hard manual work to identify relevant information inside. We introduce a Python class for the memory analysis framework Rekall that is based on our analysis results and allows access to all chunks contained in the heap and their meta information. Further, based on this class, six plugins have been developed that support an investigator in analyzing user space processes: Four of these plugins provide generic analysis capabilities such as finding information/references within chunks and dumping chunks into separate files for further investigation. These plugins have been used to reverse engineer data structures within the heap for user space processes, while illustrating how such plugins ease the whole analysis process. The remaining two plugins are a result of these user space process analyses and are extracting the command history for the zsh shell and password entry information for the password manager KeePassX.

      PubDate: 2017-08-31T19:43:34Z
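
As a taste of what such heap analysis involves, the sketch below walks glibc malloc chunk headers in a raw 64-bit heap dump: each chunk carries a size field whose low three bits are flags. This is a deliberate simplification (tcache, fastbins and the top chunk are ignored) and is independent of the Rekall plugins described in the paper.

```python
import struct

PREV_INUSE, IS_MMAPPED, NON_MAIN_ARENA = 0x1, 0x2, 0x4

def walk_chunks(heap, base=0):
    """Yield simplified chunk records from a raw 64-bit glibc heap dump."""
    off = 0
    while off + 16 <= len(heap):
        prev_size, size_field = struct.unpack_from("<QQ", heap, off)
        size = size_field & ~0x7                 # mask off the flag bits
        if size < 32 or off + size > len(heap):
            break                                # corrupt data or end of heap
        yield {
            "addr": base + off,
            "size": size,
            "prev_inuse": bool(size_field & PREV_INUSE),
            "data": heap[off + 16 : off + size],
        }
        off += size
```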
       
  • Extending The Sleuth Kit and its underlying model for pooled storage file
           system forensic analysis
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Jan-Niclas Hilgert, Martin Lambertz, Daniel Plohmann
      Carrier's book File System Forensic Analysis is one of the most comprehensive sources when it comes to the forensic analysis of file systems. Published in 2005, it provides details about the most commonly used file systems of that time as well as a process model to analyze file systems in general. The Sleuth Kit is the implementation of Carrier's model and it is still widely used during forensic analyses today—standalone or as a basis for forensic suites such as Autopsy. While The Sleuth Kit is still actively maintained, the model has not seen any updates since then. Moreover, there is no support for modern file systems implementing new paradigms such as pooled storage. In this paper, we present an update to Carrier's model which enables the analysis of pooled storage file systems. To demonstrate that our model is suitable, we implemented it for ZFS—a file system for large scale storage, cloud, and virtualization environments—and show how to perform an analysis of this file system using our model and extended toolkit.

      PubDate: 2017-08-31T19:43:34Z
       
  • Gaslight: A comprehensive fuzzing architecture for memory forensics
           frameworks
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Andrew Case, Arghya Kusum Das, Seung-Jong Park, J. (Ram) Ramanujam, Golden G. Richard
      Memory forensics is now a standard component of digital forensic investigations and incident response handling, since memory forensic techniques are quite effective in uncovering artifacts that might be missed by traditional storage forensics or live analysis techniques. Because of the crucial role that memory forensics plays in investigations and because of the increasing use of automation of memory forensics techniques, it is imperative that these tools be resilient to memory smear and deliberate tampering. Without robust algorithms, malware may go undetected, frameworks may crash when attempting to process memory samples, and automation of memory forensics techniques is difficult. In this paper we present Gaslight, a powerful and flexible fuzz-testing architecture for stress-testing both open and closed-source memory forensics frameworks. Gaslight automatically targets critical code paths that process memory samples and mutates samples in an efficient way to reveal implementation errors. In experiments we conducted against several popular memory forensics frameworks, Gaslight revealed a number of critical previously undiscovered bugs.

      PubDate: 2017-08-31T19:43:34Z
       
  • Analyzing user-event data using score-based likelihood ratios with marked
           point processes
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Christopher Galbraith, Padhraic Smyth
      In this paper we investigate the application of score-based likelihood ratio techniques to the problem of detecting whether two time-stamped event streams were generated by the same source or by two different sources. We develop score functions for event data streams by building on ideas from the statistical modeling of marked point processes, focusing in particular on the coefficient of segregation and mingling index. The methodology is applied to a data set consisting of logs of computer activity over a 7-day period from 28 different individuals. Experimental results on known same-source and known different-source data sets indicate that the proposed scores have significant discriminative power in this context. The paper concludes with a discussion of the potential benefits and challenges that may arise from the application of statistical analysis to user-event data in digital forensics.

      PubDate: 2017-08-31T19:43:34Z
       
  • Time-of-recording estimation for audio recordings
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): Lilei Zheng, Ying Zhang, Chien Eao Lee, Vrizlynn L.L. Thing
      This work addresses the problem of ENF pattern matching in the task of time-of-recording estimation. Inspired by the principle of visual comparison, we propose a novel similarity criterion, the bitwise similarity, for measuring the similarity between two ENF signals. A search system is then developed to find the best matches for a given test ENF signal within a large searching scope on the reference ENF data. By empirical comparison to other popular similarity criteria, we demonstrate that the proposed method is more effective and efficient than the state of the art. For example, compared with the recent DMA algorithm, our method achieves a relative error rate decrease of 86.86% (from 20.32% to 2.67%) and a 45× speedup in search response (from 41.0444 s down to 0.8973 s). Last but not least, we present a strategy of uniqueness examination to help human examiners ensure high-precision decisions, which makes our method practical for potential forensic use.

      PubDate: 2017-08-31T19:43:34Z
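
The exact definition of the paper's bitwise similarity is not given in the abstract, so the sketch below encodes one plausible reading: quantise each ENF signal into a bit stream of up/down movements and score the fraction of agreeing bits while sliding over the reference data. The authors' actual criterion may differ.

```python
import numpy as np

def to_bits(enf):
    """1 where the frequency rises (or stays flat), 0 where it falls."""
    return (np.diff(np.asarray(enf, dtype=float)) >= 0).astype(np.uint8)

def bitwise_similarity(test, ref_window):
    a, b = to_bits(test), to_bits(ref_window)
    return float(np.mean(a == b))             # fraction of agreeing bits

def best_match(test, reference):
    """Slide the test signal over the reference ENF data; return best offset."""
    n = len(test)
    scores = [bitwise_similarity(test, reference[i:i + n])
              for i in range(len(reference) - n + 1)]
    return int(np.argmax(scores)), max(scores)
```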
       
  • Carving database storage to detect and trace security breaches
    • Abstract: Publication date: August 2017
      Source:Digital Investigation, Volume 22, Supplement
      Author(s): James Wagner, Alexander Rasin, Boris Glavic, Karen Heart, Jacob Furst, Lucas Bressan, Jonathan Grier
      Database Management Systems (DBMS) are routinely used to store and process sensitive enterprise data. However, it is not possible to secure data by relying on the access control and security mechanisms (e.g., audit logs) of such systems alone – users may abuse their privileges (no matter whether granted or gained illegally) or circumvent security mechanisms to maliciously alter and access data. Thus, in addition to taking preventive measures, the major goal of database security is to 1) detect breaches and 2) to gather evidence about attacks for devising counter measures. We present an approach that evaluates the integrity of a live database, identifying and reporting evidence for log tampering. Our approach is based on forensic analysis of database storage and detection of inconsistencies between database logs and physical storage state (disk and RAM). We apply our approach to multiple DBMS to demonstrate its effectiveness in discovering malicious operations and providing detailed information about the data that was illegally accessed/modified.

      PubDate: 2017-08-31T19:43:34Z
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: June 2017
      Source:Digital Investigation, Volume 21


      PubDate: 2017-06-05T14:57:04Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: June 2017
      Source:Digital Investigation, Volume 21


      PubDate: 2017-06-05T14:57:04Z
       
  • The broadening horizons of digital investigation
    • Abstract: Publication date: Available online 16 May 2017
      Source:Digital Investigation
      Author(s): Eoghan Casey


      PubDate: 2017-05-21T02:44:35Z
       