Digital Investigation
  [SJR: 0.674]   [H-I: 32]
   Full-text available via subscription
   ISSN (Print) 1742-2876
   Published by Elsevier
  • Using computed similarity of distinctive digital traces to evaluate
           non-obvious links and repetitions in cyber-investigations
    • Authors: Timothy Bollé; Eoghan Casey
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Timothy Bollé, Eoghan Casey
      This work addresses the challenge of discerning non-exact or non-obvious similarities between cybercrimes, proposing a new approach to finding linkages and repetitions across cases in a cyber-investigation context using near similarity calculation of distinctive digital traces. A prototype system was developed to test the proposed approach, and the system was evaluated using digital traces collected during actual cyber-investigations. The prototype system also links cases on the basis of exact similarity between technical characteristics. This work found that the introduction of near similarity helps to confirm existing links and exposes additional linkages between cases. Automatic detection of near similarities across cybercrimes gives digital investigators a better understanding of the criminal context and the actual phenomenon, and can reveal a series of related offenses. Using case data from 207 cyber-investigations, this study evaluated the effectiveness of computing similarity between cases by applying string similarity algorithms to email addresses. The Levenshtein algorithm was selected as the best algorithm to segregate similar email addresses from non-similar ones. This work can be extended to other digital traces common in cybercrimes, such as URLs and domain names. In addition to finding linkages between related cybercrimes at a technical level, similarities in patterns across cases provided insights at a behavioral level, such as modus operandi (MO). This work also addresses the steps that come after the similarity computation: linkage verification and hypothesis formation. For forensic purposes, it is necessary to confirm that a near match found by the similarity algorithm actually corresponds to a real relation between observed characteristics, and it is important to evaluate the likelihood that the disclosed similarity supports the hypothesis of a link between cases. This work recommends additional information, including certain technical, contextual and behavioral characteristics, that could be collected routinely in cyber-investigations to support similarity computation and link evaluation.
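
      As an aside for practitioners, the core similarity step is easy to prototype. Below is a minimal Python sketch using a normalised Levenshtein ratio to flag near-matching email addresses across two cases; the addresses and the 0.8 threshold are illustrative choices, not values from the paper.

        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        def similarity(a: str, b: str) -> float:
            """Normalise edit distance into a 0..1 similarity score."""
            if not a and not b:
                return 1.0
            return 1.0 - levenshtein(a, b) / max(len(a), len(b))

        # Flag near-similar (but non-identical) addresses across two cases.
        case_a = ["fraudster01@example.com"]
        case_b = ["fraudster-01@example.com", "unrelated@example.org"]
        for x in case_a:
            for y in case_b:
                score = similarity(x, y)
                if x != y and score >= 0.8:   # illustrative cut-off
                    print(f"possible link: {x} ~ {y} (score {score:.2f})")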

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.002
      Issue No: Vol. 24 (2018)
       
  • The reliability of clocks as digital evidence under low voltage conditions
    • Authors: Jens-Petter Sandvik; André Årnes
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Jens-Petter Sandvik, André Årnes
      Battery-powered electronic devices like mobile phones are abundant in the world today, and such devices are often subject to digital forensic examinations. In this paper, we show that the assumption that clocks are close to correct can be misleading under some circumstances, especially with failing batteries. One of four tested devices showed clock jumps of 8 and 12 years into the future when the battery connector voltage was held at 2.030 V and 2.100 V for about 9 s. The other devices showed more expected behavior, where the clocks slowly lagged until they were reset. In addition, we tested the precision of some methods of documenting the clock settings, and found most timestamps to be within reasonable precision for forensic use. Finally, we describe a model for the variability of the timestamps examined.
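
      In practice, the finding argues for explicitly documenting the offset between a device clock and a trusted reference at acquisition time rather than assuming the clock is correct. A minimal sketch of that bookkeeping follows; the helper name and output format are illustrative, not from the paper.

        from datetime import datetime, timezone

        def record_clock_offset(device_time: datetime, label: str) -> dict:
            """Log a device clock reading against a trusted reference clock."""
            reference = datetime.now(timezone.utc)
            offset_s = (device_time - reference).total_seconds()
            return {"device": label,
                    "device_time": device_time.isoformat(),
                    "reference_time": reference.isoformat(),
                    "offset_seconds": round(offset_s, 3)}

        # Example: a phone whose displayed time was read as 2018-03-01 10:15:42 UTC.
        print(record_clock_offset(
            datetime(2018, 3, 1, 10, 15, 42, tzinfo=timezone.utc), "phone-01"))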

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.003
      Issue No: Vol. 24 (2018)
       
  • Styx: Countering robust memory acquisition
    • Authors: Ralph Palutke; Felix Freiling
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Ralph Palutke, Felix Freiling
      Images of main memory are an increasingly important piece of evidence in cybercrime investigations, especially against advanced malware threats, and software tools that dump memory during normal system operation are the most common way to acquire memory images today. Of all proposed methods, Stüttgen and Cohen's robust memory acquisition (as implemented in the pmem tool) can be considered the most advanced technique today. This paper presents Styx, a proof-of-concept system that perfectly covers its traces against pmem and other tools that perform software-based forensic memory acquisition. Styx is implemented as a loadable kernel module and is able to subvert running 64-bit Linux systems using Intel's VT-x hardware virtualization extension, without requiring the system to reboot. It further uses second-level address translation via Intel's EPT to hide behind hidden memory. While exhibiting the limitations of robust memory acquisition, it also shows the potential of undetectable forensic analysis software.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.004
      Issue No: Vol. 24 (2018)
       
  • OpenForensics: A digital forensics GPU pattern matching approach for the
           21st century
    • Authors: E. Bayne; R.I. Ferguson; A.T. Sampson
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): E. Bayne, R.I. Ferguson, A.T. Sampson
      Pattern matching is a crucial component employed in many digital forensic (DF) analysis techniques, such as file carving. The capacity of storage available on modern consumer devices has increased substantially in recent years, making the pattern matching approaches of current-generation DF tools increasingly ineffective in performing timely analyses on data seized in a DF investigation. As pattern matching is a trivially parallelisable problem, general-purpose programming on graphics processing units (GPGPU) is a natural fit for this problem. This paper presents a pattern matching framework – OpenForensics – that demonstrates substantial performance improvements from the use of modern parallelisable algorithms and graphics processing units (GPUs) to search for patterns within forensic images and local storage devices.
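
      To make the underlying problem concrete, the sketch below scans a disk image for file signatures in fixed-size chunks. It is a naive, single-threaded baseline of exactly the inner search that GPU approaches parallelise; the signatures are standard magic bytes, and matches falling inside the chunk overlap may be reported twice in this simplified version.

        import io

        # Common file-carving signatures (magic bytes).
        SIGNATURES = {
            b"\xFF\xD8\xFF": "JPEG header",
            b"\x89PNG\r\n\x1a\n": "PNG header",
            b"PK\x03\x04": "ZIP container header",
        }

        def scan_image(stream, chunk_size=1 << 20):
            """Sequentially search a byte stream for known signatures."""
            overlap = max(len(s) for s in SIGNATURES) - 1
            base, tail = 0, b""
            while True:
                chunk = stream.read(chunk_size)
                if not chunk:
                    break
                buf = tail + chunk
                for sig, name in SIGNATURES.items():
                    pos = buf.find(sig)
                    while pos != -1:
                        print(f"{name} at byte offset {base + pos}")
                        pos = buf.find(sig, pos + 1)
                # Keep an overlap so signatures spanning chunk borders are found.
                tail = buf[-overlap:]
                base += len(buf) - len(tail)

        # Usage on a small in-memory example (replace with an image file handle).
        scan_image(io.BytesIO(b"junk" + b"\xFF\xD8\xFF" + b"more data"))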

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.005
      Issue No: Vol. 24 (2018)
       
  • Nugget: A digital forensics language
    • Authors: Christopher Stelly; Vassil Roussev
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Christopher Stelly, Vassil Roussev
      One of the long-standing conceptual problems in digital forensics is the dichotomy between the imperative for verifiable and reproducible forensic computations, and the lack of adequate mechanisms to accomplish these goals. With over thirty years of professional practice, investigator notes are still the main source of reproducibility information, and much of it is tied to the functions of specific, often proprietary, tools. In this work, we discuss the design and implementation of a domain specific language (DSL) called nugget, which aims to enable the practical formal specification of digital forensic computations in a tool-agnostic fashion. The core idea of DSLs, such as SQL, is to create an intuitive means for domain experts to describe what computation needs to be performed while abstracting away the technical means of its implementation. In the context of digital forensics, nugget aims to address the following requirements: 1) provide investigators with the means to easily and completely specify the data flow of a forensic inquiry from data source to final results; 2) allow the fully automatic (and optimized) execution of the forensic computation; 3) provide a complete, formal, and auditable log of the inquiry.
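
      Nugget's actual syntax is not reproduced here; purely as an illustration of the declarative idea, the Python sketch below chains a declared list of processing steps over a data source while keeping an auditable log of each step. All names are invented for illustration and do not reflect the nugget language.

        import hashlib, json

        def run_inquiry(source_bytes: bytes, steps):
            """Execute declared steps in order, logging a digest after each."""
            log, data = [], source_bytes
            for name, fn in steps:
                data = fn(data)
                log.append({"step": name,
                            "digest": hashlib.sha256(repr(data).encode()).hexdigest()})
            return data, log

        # Declared inquiry: tokenise a source, then keep email-like tokens.
        steps = [
            ("tokenise", lambda b: b.decode(errors="ignore").split()),
            ("filter_emails", lambda tokens: [t for t in tokens if "@" in t]),
        ]
        result, audit_log = run_inquiry(b"contact alice@example.com or bob", steps)
        print(result)                           # ['alice@example.com']
        print(json.dumps(audit_log, indent=2))  # reproducible audit trail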

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.006
      Issue No: Vol. 24 (2018)
       
  • MalDozer: Automatic framework for android malware detection using deep
           learning
    • Authors: ElMouatez Billah Karbab; Mourad Debbabi; Abdelouahid Derhab; Djedjiga Mouheb
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): ElMouatez Billah Karbab, Mourad Debbabi, Abdelouahid Derhab, Djedjiga Mouheb
      The Android OS has experienced blazing popularity over the last few years. This predominant platform has established itself not only in the mobile world but also in Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as Android has become a tempting target for malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequence classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is deployed not only on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1K to 33K malware apps, and 38K benign apps. The results show that MalDozer can correctly detect malware and attribute it to the actual families with an F1-score of 96%–99% and a false positive rate of 0.06%–2% across all tested datasets and settings.
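
      The paper's model learns directly from raw API call sequences with neural embeddings; as a much-simplified stand-in, the sketch below classifies apps from API-call counts using scikit-learn. The API names and labels are toy data, and a bag-of-calls model is far weaker than the sequence model MalDozer actually uses.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.neural_network import MLPClassifier

        # Toy corpora of API method-call sequences (space-joined for vectorising).
        apps = [
            "sendTextMessage getDeviceId openConnection",      # malware-like
            "getDeviceId sendTextMessage sendTextMessage",     # malware-like
            "findViewById setOnClickListener startActivity",   # benign-like
            "startActivity findViewById bindService",          # benign-like
        ]
        labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

        vec = CountVectorizer(token_pattern=r"\S+")
        X = vec.fit_transform(apps)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(X, labels)
        print(clf.predict(vec.transform(["openConnection sendTextMessage getDeviceId"])))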

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.007
      Issue No: Vol. 24 (2018)
       
  • Forensics acquisition — Analysis and circumvention of Samsung secure
           boot enforced common criteria mode
    • Authors: Gunnar Alendal; Geir Olav Dyrkolbotn; Stefan Axelsson
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Gunnar Alendal, Geir Olav Dyrkolbotn, Stefan Axelsson
      The acquisition of data from mobile phones has been a mainstay of criminal digital forensics for a number of years now. However, this forensic acquisition is getting more and more difficult with the increasing security level and complexity of mobile phones (and other embedded devices). In addition, it is often difficult or impossible to get access to design specifications, documentation and source code. As a result, forensic acquisition methods are also increasing in complexity, requiring an ever deeper understanding of the underlying technology and its security mechanisms. Forensic acquisition techniques are turning to more offensive solutions that bypass security mechanisms through security vulnerabilities. Common Criteria mode is a security feature that increases the security level of Samsung devices, and thus makes forensic acquisition more difficult for law enforcement. With no access to design documents or source code, we have reverse engineered how Common Criteria mode is actually implemented and protected by Samsung's secure bootloader. We present how this security mode is enforced, the security vulnerabilities therein, and how the discovered security vulnerabilities can be used to circumvent Common Criteria mode for further forensic acquisition.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.008
      Issue No: Vol. 24 (2018)
       
  • Forensic framework to identify local vs synced artefacts
    • Authors: Jacques Boucher; Nhien-An Le-Khac
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Jacques Boucher, Nhien-An Le-Khac
      Today, application developers strive to make a user's experience seamless as they move from one device to the next by synchronizing the user's data between devices. With the ever-increasing proliferation of Internet-connected devices we can expect to see greater integration and synchronization between these devices, and the end user benefits from this seamless synchronization of data. For computer forensic examiners, however, the synchronization of data between devices translates to both a benefit and a challenge. The benefit is that the device being analyzed may contain evidence that synced from another device which cannot itself be found. The challenge is that evidence found on the device being analyzed may have synced from another device rather than having been created locally. In most jurisdictions police must prove mens rea, the intention or knowledge of wrongdoing. It is a challenge for examiners if a user claims that the evidence found on their laptop was created by an unknown user on another device, and that this activity synced to their laptop. There is very little research in the literature on synchronization of data between devices. Therefore, in this paper, we propose a framework to guide computer forensic examiners in their quest to determine whether data is local or synced. We also demonstrate the application of our framework on a known scenario to evaluate the confidence an analyst can attribute to each section of the framework, and the caveats that need to be considered when forming an opinion on whether data is local or synced.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.009
      Issue No: Vol. 24 (2018)
       
  • Educating judges, prosecutors and lawyers in the use of digital forensic
           experts
    • Authors: Hans Henseler; Sophie van Loenhout
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Hans Henseler, Sophie van Loenhout
      Recent years have seen an exponential growth of evidence in digital forensic investigations. Digital Forensics (DF) experts are predicting, amongst other things, a ’digital explosion’ of ransomware in the coming years. The legal community must be prepared to deal with an increase of digital evidence in both volume and complexity. In cooperation with experts in the field, the Netherlands Register of Court Experts (NRGD) has recently developed standards and registration requirements for DF experts in the Netherlands. This article describes how these standards were established and provides insight into the requirements that a DF expert should meet to qualify as an NRGD-registered expert. Registration is now open to all DF experts, both Dutch and non-Dutch. Furthermore, this article can be used by DF experts worldwide to educate judges, prosecutors and lawyers who make use of their reports. It illustrates what the legal community can expect from DF court experts, provides a demarcation of the DF field based on DF literature, and presents examples of relevant questions that can or should be asked of a DF expert.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.010
      Issue No: Vol. 24 (2018)
       
  • Controlled experiments in digital evidence tampering
    • Authors: Felix Freiling; Leonhard Hösch
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Felix Freiling, Leonhard Hösch
      We report on a sequence of experiments performed with graduate-level students on the tampering of digital evidence. The task of the study participants was to manipulate a given disk image so that it looked as if a website had been accessed and images downloaded in the past. Later, the same students had to distinguish their forgeries from a set of originals in which the images actually had been downloaded. During all parts of the experiment, efforts were recorded in project diaries. Overall, the results show that the tampering task was difficult, since none of the forgeries was mistaken for an original. Furthermore, the analysis effort to detect forgeries was consistently below the effort to create them, even in the worst-case scenario where the manipulator had full control over the evidence. It also generally required less effort to correctly classify an original than to correctly classify a forgery. Additionally, we derived results confirming that the effort to construct consistently manipulated evidence increases with decreasing control, i.e., the ability to precisely act upon the evidence.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.011
      Issue No: Vol. 24 (2018)
       
  • A comparative study on data protection legislations and government
           standards to implement Digital Forensic Readiness as mandatory requirement
           
    • Authors: Sungmi Park; Nikolay Akatyev; Yunsik Jang; Jisoo Hwang; Donghyun Kim; Woonseon Yu; Hyunwoo Shin; Changhee Han; Jonghyun Kim
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Sungmi Park, Nikolay Akatyev, Yunsik Jang, Jisoo Hwang, Donghyun Kim, Woonseon Yu, Hyunwoo Shin, Changhee Han, Jonghyun Kim
      Many data breaches have happened due to poor implementation or the complete absence of security controls in private companies as well as in government organizations. Many countries are working on improving security requirements and implementing them in their legislation. However, most security frameworks are reactive and do not address relevant threats. Existing research suggests Digital Forensic Readiness as a proactive measure, but there is only one example of its implementation as a policy. Our work surveys the current state of data protection legislation in selected countries and their initiatives for the implementation of Digital Forensic Readiness. We then discuss whether Digital Forensic Readiness as a mandatory requirement can improve the state of data protection in both the public and private sectors, evaluating possible challenges. We contribute suggestions for the adoption of Digital Forensic Readiness as a mandatory requirement for private companies and government organizations.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.012
      Issue No: Vol. 24 (2018)
       
  • Building stack traces from memory dump of Windows x64
    • Authors: Yuto Otsuki; Yuhei Kawakoya; Makoto Iwamura; Jun Miyoshi; Kazuhiko Ohkubo
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Yuto Otsuki, Yuhei Kawakoya, Makoto Iwamura, Jun Miyoshi, Kazuhiko Ohkubo
      Stack traces play an important role in memory forensics as well as program debugging. This is because stack traces provide a history of executed code in a malware-infected host, and this history can become a clue for forensic analysts to uncover the cause of an incident, i.e., what the malware has actually done on the host. Nevertheless, existing research and tools for building stack traces for memory forensics are not well designed for x64 environments, even though x64 has become the most popular environment. In this paper, we introduce the design and implementation of our method for building stack traces from a memory dump of the Windows x64 environment. To build a stack trace, we retrieve a user context of the target thread from a memory dump to determine the start point of the stack trace, and then emulate stack unwinding by referencing the metadata for exception handling to build the call stack of the thread. Even if the metadata are unavailable, which often occurs in cases involving malicious software, we manage to produce the equivalent data by scanning the stack with a flow-based verification method. In this paper, we discuss the evaluation of our method by comparing the stack traces it builds with those built by WinDbg to show the accuracy of our method. We also explain some case studies using real malware to show the practicability of our method.
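
      The fallback described above, scanning the stack for plausible return addresses, can be sketched as follows. This toy version only checks whether each pointer-sized stack slot falls inside a known code range, whereas the paper's flow-based verification additionally checks, for example, that the address follows a call instruction; all addresses and data here are invented.

        import struct

        # Invented module ranges and a raw stack capture for illustration.
        CODE_RANGES = [(0x7FF600000000, 0x7FF600100000)]  # (base, end) of loaded code
        stack = struct.pack("<4Q",
                            0x00000000DEADBEEF,  # data, not a return address
                            0x7FF600001234,      # plausible return address
                            0x0000000000000040,  # data
                            0x7FF600045678)      # plausible return address

        def candidate_return_addresses(stack_bytes: bytes):
            """Scan each 8-byte slot for values landing inside code ranges."""
            for i in range(0, len(stack_bytes) - 7, 8):
                (value,) = struct.unpack_from("<Q", stack_bytes, i)
                if any(base <= value < end for base, end in CODE_RANGES):
                    yield i, value

        for slot, addr in candidate_return_addresses(stack):
            print(f"stack+{slot:#06x}: possible return address {addr:#x}")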

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.013
      Issue No: Vol. 24 (2018)
       
  • Anti-forensics in ext4: On secrecy and usability of timestamp-based data
           hiding
    • Authors: Thomas Göbel; Harald Baier
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Thomas Göbel, Harald Baier
      Ext4 is a popular file system used by Android and many Linux distributions. With its rising pervasiveness, anti-forensic techniques like data hiding may be used to conceal data. This paper analyzes the feasibility of using timestamps of the ext4 file system to hide data. First, we examine the usage, the structure and the capacity of the available timestamps with a special focus on their sub-second granularity. The results reveal that the nanoseconds part of the ext4 timestamps can be used to build a system with steganographic strength. Second, we devise an ext4 anti-forensic technique that offers secrecy of the hidden data and easy usability in a wide range of scenarios. We provide a set of requirements (e.g., indistinguishability of regular and tampered timestamps) and a proof-of-concept implementation that is able to conceal arbitrary data within the file system timestamps. The evaluation shows that our implementation satisfies our requirements and actually works in practice.
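
      The basic channel can be illustrated in a few lines: hide one byte per file in the low bits of the modification timestamp's nanosecond field. This bare sketch omits the indistinguishability measures the paper requires, assumes a filesystem that stores nanosecond timestamps (as ext4 does), and uses illustrative file names.

        import os

        def hide_byte(path: str, secret: int) -> None:
            """Embed one byte in the low 8 bits of the mtime nanosecond field."""
            st = os.stat(path)
            mtime_ns = (st.st_mtime_ns & ~0xFF) | (secret & 0xFF)
            os.utime(path, ns=(st.st_atime_ns, mtime_ns))

        def recover_byte(path: str) -> int:
            return os.stat(path).st_mtime_ns & 0xFF

        # Hide b"hi" across two carrier files.
        carriers = ["carrier0.txt", "carrier1.txt"]
        for fname, byte in zip(carriers, b"hi"):
            with open(fname, "w") as f:
                f.write("innocuous content\n")
            hide_byte(fname, byte)

        print(bytes(recover_byte(f) for f in carriers))  # b'hi'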

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.014
      Issue No: Vol. 24 (2018)
       
  • A standardized corpus for SQLite database forensics
    • Authors: Sebastian Nemetz; Sven Schmitt; Felix Freiling
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement
      Author(s): Sebastian Nemetz, Sven Schmitt, Felix Freiling
      An increasing number of programs like browsers or smartphone apps are using SQLite3 databases to store application data. In many cases, such data is of high value during a forensic investigation. Therefore, various tools have been developed that claim to support rigorous forensic analysis of SQLite database files, claims that are not supported by appropriate evidence. We present a standardized corpus of SQLite files that can be used to evaluate and benchmark analysis methods and tools. The corpus contains databases which use special features of the SQLite file format or contain potential pitfalls to detect errors in forensic programs. We apply our corpus to a set of six available tools and evaluate their strengths and weaknesses. In particular, we show that none of these tools can reliably handle all corner cases of the SQLite3 format.
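
      One class of corner case such a corpus can capture is deleted-but-recoverable data. The sketch below builds a tiny SQLite database in which a deleted row may persist on freelist pages inside the file; whether a forensic tool surfaces that row is precisely the kind of behaviour a corpus entry with ground truth can test. The schema and filename are illustrative, not taken from the corpus.

        import sqlite3

        con = sqlite3.connect("corpus_deleted_rows.db")
        con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
        con.executemany("INSERT INTO messages (body) VALUES (?)",
                        [("hello",), ("meet at 10",), ("delete me",)])
        con.commit()

        # Deleting a row marks its space reusable; without VACUUM the old
        # record content can remain on freelist pages inside the file.
        con.execute("DELETE FROM messages WHERE body = 'delete me'")
        con.commit()
        con.close()

        # Ground truth stored alongside the corpus entry.
        print({"live": ["hello", "meet at 10"], "deleted": ["delete me"]})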

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.01.015
      Issue No: Vol. 24 (2018)
       
  • Clearly conveying digital forensic results
    • Authors: Eoghan Casey
      Pages: 1 - 3
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24
      Author(s): Eoghan Casey


      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.03.001
      Issue No: Vol. 24 (2018)
       
  • Keystroke dynamics features for gender recognition
    • Authors: Ioannis Tsimperidis; Avi Arampatzis; Alexandros Karakos
      Pages: 4 - 10
      Abstract: Publication date: Available online 5 February 2018
      Source:Digital Investigation
      Author(s): Ioannis Tsimperidis, Avi Arampatzis, Alexandros Karakos
      This work attempts to recognize the gender of an unknown user with data derived only from keystroke dynamics. Keystroke dynamics, which can be described as the way a user types, usually amount to tens of thousands of features, each carrying some information. The question that arises is which of these characteristics are most suitable for gender classification. To answer this question, a new dataset was created by recording users during the daily usage of their computers, the information gain of each keystroke dynamics feature was calculated, and five well-known classification models were used to test the feature sets. The results show that the gender of an unknown user can be identified with an accuracy of over 95% with only a few hundred features. This percentage, which is the highest found in the literature, is quite promising for the development of reliable systems that can alert an unsuspecting user to being a victim of deception. Moreover, having the ability to identify the gender of a user who types a certain piece of text is of significant importance in digital forensics. This holds true, as it could be the source of circumstantial evidence for “putting fingers on the keyboard” and for arbitrating cases where the true origin of a message needs to be identified.
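
      Two feature families dominate keystroke dynamics: hold times (release minus press of one key) and digraph latencies (press-to-press delay between consecutive keys). The sketch below extracts both from a toy event log; the event data is invented for illustration.

        from collections import defaultdict
        from statistics import mean

        # Each event: (key, press_time_ms, release_time_ms).
        events = [("h", 0, 85), ("e", 140, 210), ("l", 290, 360), ("l", 430, 505)]

        # Hold time: how long each key stays depressed (averaged over repeats).
        holds = defaultdict(list)
        for key, press, release in events:
            holds[key].append(release - press)
        hold_features = {f"hold[{k}]": mean(v) for k, v in holds.items()}

        # Digraph latency: press-to-press delay between consecutive keys.
        digraphs = defaultdict(list)
        for (k1, p1, _), (k2, p2, _) in zip(events, events[1:]):
            digraphs[k1 + k2].append(p2 - p1)
        digraph_features = {f"dd[{d}]": mean(v) for d, v in digraphs.items()}

        print({**hold_features, **digraph_features})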

      PubDate: 2018-02-25T15:48:53Z
      DOI: 10.1016/j.diin.2018.01.018
      Issue No: Vol. 24 (2018)
       
  • Smartphone data evaluation model: Identifying authentic smartphone data
    • Authors: Heloise Pieterse; Martin Olivier; Renier van Heerden
      Pages: 11 - 24
      Abstract: Publication date: Available online 14 February 2018
      Source:Digital Investigation
      Author(s): Heloise Pieterse, Martin Olivier, Renier van Heerden
      Ever improving smartphone technology, along with the widespread use of the devices to accomplish daily tasks, leads to the collection of rich sources of smartphone data. Smartphone data are, however, susceptible to change and can be altered intentionally or accidentally by end-users or installed applications. It is therefore important to establish the authenticity of smartphone data, confirming the data refer to actual events, before submitting the data as potential evidence. This paper focuses on data created by smartphone applications and the techniques that can be used to establish the authenticity of the data. To identify authentic smartphone data, a better understanding of the smartphone, the related smartphone applications and the environment in which the smartphone operates is required. From the gathered knowledge and insight, requirements are identified that authentic smartphone data must adhere to. These requirements are captured in a new model to assist digital forensic professionals with the evaluation of smartphone data. Experiments involving different smartphones are conducted to determine the practicality of the new evaluation model in identifying authentic smartphone data. The presented results provide preliminary evidence that the suggested model offers the necessary guidance to identify authentic smartphone data.

      PubDate: 2018-02-25T15:48:53Z
      DOI: 10.1016/j.diin.2018.01.017
      Issue No: Vol. 24 (2018)
       
  • An in-depth analysis of Android malware using hybrid techniques
    • Authors: Abdullah Talha Kabakus; Ibrahim Alper Dogru
      Pages: 25 - 33
      Abstract: Publication date: Available online 31 January 2018
      Source:Digital Investigation
      Author(s): Abdullah Talha Kabakus, Ibrahim Alper Dogru
      Android malware is widespread despite the efforts made by Google to keep it out of the official application market, Play Store. Two techniques, namely static and dynamic analysis, are commonly used to detect malicious applications in the Android ecosystem. Both of these techniques have their own advantages and disadvantages. In this paper, we propose a novel hybrid Android malware analysis approach, namely mad4a, which uses the advantages of both static and dynamic analysis techniques. The aim of this study is to reveal some unknown characteristics of Android malware through the various analysis techniques used. As the result of static and dynamic analysis on widely used Android application datasets, digital investigators are informed about some underestimated characteristics of Android malware.

      PubDate: 2018-02-04T22:41:05Z
      DOI: 10.1016/j.diin.2018.01.001
      Issue No: Vol. 24 (2018)
       
  • Lempel-Ziv Jaccard Distance, an effective alternative to ssdeep and sdhash
    • Authors: Edward Raff; Charles Nicholas
      Pages: 34 - 49
      Abstract: Publication date: Available online 9 February 2018
      Source:Digital Investigation
      Author(s): Edward Raff, Charles Nicholas
      Recent work has proposed the Lempel-Ziv Jaccard Distance (LZJD) as a method to measure the similarity between binary byte sequences for malware classification. We propose and test LZJD's effectiveness as a similarity digest hash for digital forensics. To do so we develop a high performance Java implementation with the same command-line arguments as sdhash, making it easy to integrate into existing work-flows. Our testing shows that LZJD is effective for this task, and significantly outperforms sdhash and ssdeep in its ability to match related file fragments and files corrupted with random noise. In addition, LZJD is up to 60× faster than sdhash at comparison time.
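
      The construction behind LZJD is compact enough to sketch: parse each byte sequence into its set of distinct Lempel-Ziv substrings, then compare the sets with the Jaccard index. The version below is a minimal unhashed illustration; the real implementation hashes substrings and digests the sets for speed.

        def lz_set(data: bytes) -> set:
            """Set of distinct substrings produced by LZ78-style parsing."""
            seen, current = set(), b""
            for value in data:
                current += bytes([value])
                if current not in seen:
                    seen.add(current)
                    current = b""
            return seen

        def lzjd_similarity(a: bytes, b: bytes) -> float:
            """Jaccard index of the two Lempel-Ziv sets (1 - LZJD distance)."""
            sa, sb = lz_set(a), lz_set(b)
            return len(sa & sb) / len(sa | sb)

        original = b"forensic evidence fragment " * 20
        corrupted = original[:200] + b"XX" + original[202:]   # small corruption
        unrelated = b"completely different content!!! " * 18

        print(round(lzjd_similarity(original, corrupted), 3))  # high
        print(round(lzjd_similarity(original, unrelated), 3))  # low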

      PubDate: 2018-02-25T15:48:53Z
      DOI: 10.1016/j.diin.2017.12.004
      Issue No: Vol. 24 (2018)
       
  • HDFS file operation fingerprints for forensic investigations
    • Authors: Mariam Khader; Ali Hadi; Ghazi Al-Naymat
      Pages: 50 - 61
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24
      Author(s): Mariam Khader, Ali Hadi, Ghazi Al-Naymat
      Understanding the Hadoop Distributed File System (HDFS) is currently an important issue for forensic investigators because it is the core of most Big Data environments. The HDFS requires more study to understand how forensic investigations should be performed and what artifacts can be extracted from this framework. The HDFS framework encompasses a large amount of data; thus, in most forensic analyses, it is not possible to gather all of the data, resulting in metadata and logs playing a vital role. In a good forensic analysis, metadata artifacts could be used to establish a timeline of events, highlight patterns of file-system operation, and point to gaps in the data. This paper provides metadata observations for HDFS operations based on fsimage and hdfs-audit logs. These observations draw a roadmap of metadata changes that aids in forensic investigations in an HDFS environment. Understanding metadata changes assists a forensic investigator in identifying what actions were performed on the HDFS. This study focuses on executing day-to-day (regular) file-system operations and recording which file metadata changes occur after each operation. Each operation was executed, and its fingerprints were detailed. The use of those fingerprints as artifacts for file-system forensic analysis was elaborated via two case studies. The results of the research include a detailed study of each operation, including which system entity (user or service) performed this operation and when, which is vital for most analysis cases. Moreover, the forensic value of examined observations is indicated by employing these artifacts in forensic analysis.
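
      Since hdfs-audit log entries are line-oriented key=value records, a small parser is enough to reconstruct who performed which file operation and when. The sketch below follows the common audit line layout, but the example entry itself is invented data.

        import re

        line = ("2018-03-01 10:15:42,123 INFO FSNamesystem.audit: "
                "allowed=true ugi=alice (auth:SIMPLE) ip=/10.0.0.7 "
                "cmd=rename src=/data/report.csv dst=/data/old/report.csv "
                "perm=alice:hadoop:rw-r--r--")

        def parse_audit(line: str) -> dict:
            """Pull the timestamp and key=value fields out of one audit entry."""
            fields = dict(re.findall(r"(\w+)=(\S+)", line))
            return {"time": line[:23], "user": fields.get("ugi"),
                    "op": fields.get("cmd"), "src": fields.get("src"),
                    "dst": fields.get("dst"), "allowed": fields.get("allowed")}

        print(parse_audit(line))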

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2017.11.004
      Issue No: Vol. 24 (2018)
       
  • Criminal motivation on the dark web: A categorisation model for law
           enforcement
    • Authors: Janis Dalins; Campbell Wilson; Mark Carman
      Pages: 62 - 71
      Abstract: Publication date: Available online 31 January 2018
      Source:Digital Investigation
      Author(s): Janis Dalins, Campbell Wilson, Mark Carman
      Research into the nature and structure of ‘Dark Webs’ such as Tor has largely focused upon manually labelling a series of crawled sites against a series of categories, sometimes using these labels as a training corpus for subsequent automated crawls. Such an approach is adequate for establishing broad taxonomies, but is of limited value for specialised tasks within the field of law enforcement. Contrastingly, existing research into illicit behaviour online has tended to focus upon particular crime types such as terrorism. A gap exists between taxonomies capable of holistic representation and those capable of detailing criminal behaviour. The absence of such a taxonomy limits interoperability between agencies, curtailing development of standardised classification tools. We introduce the Tor-use Motivation Model (TMM), a two-dimensional classification methodology specifically designed for use within a law enforcement context. The TMM achieves greater levels of granularity by explicitly distinguishing site content from motivation, providing a richer labelling schema without introducing inefficient complexity or reliance upon overly broad categories of relevance. We demonstrate this flexibility and robustness through direct examples, showing the TMM's ability to distinguish a range of unethical and illegal behaviour without bloating the model with unnecessary detail. The authors of this paper received permission from the Australian government to conduct an unrestricted crawl of Tor for research purposes, including the gathering and analysis of illegal materials such as child pornography. The crawl gathered 232,792 pages from 7651 Tor virtual domains, resulting in the collation of a wide spectrum of materials, from illicit to downright banal. Existing conceptual models and their labelling schemas were tested against a small sample of gathered data, and were observed to be either overly prescriptive or vague for law enforcement purposes - particularly when used for prioritising sites of interest for further investigation. In this paper we deploy the TMM by manually labelling a corpus of over 4000 unique Tor pages. We found a network impacted (but not dominated) by illicit commerce and money laundering, but almost completely devoid of violence and extremism. In short, criminality on this ‘dark web’ is based more upon greed and desire, rather than any particular political motivations.

      PubDate: 2018-02-04T22:41:05Z
      DOI: 10.1016/j.diin.2017.12.003
      Issue No: Vol. 24 (2018)
       
  • Alexa, did you get that? Determining the evidentiary value of data
           stored by the Amazon® Echo
    • Authors: Douglas A. Orr; Laura Sanchez
      Pages: 72 - 78
      Abstract: Publication date: Available online 1 February 2018
      Source:Digital Investigation
      Author(s): Douglas A. Orr, Laura Sanchez


      PubDate: 2018-02-04T22:41:05Z
      DOI: 10.1016/j.diin.2017.12.002
      Issue No: Vol. 24 (2018)
       
  • Following the breadcrumbs: Timestamp pattern identification for cloud
           forensics
    • Authors: Shuyuan Mary Ho; Dayu Kao; Wen-Ying Wu
      Pages: 79 - 94
      Abstract: Publication date: Available online 31 January 2018
      Source:Digital Investigation
      Author(s): Shuyuan Mary Ho, Dayu Kao, Wen-Ying Wu
      This study explores the challenges of digital forensics investigation in file access, transfer and operations, and identifies file operational and behavioral patterns based on timestamps—in both the standalone as well as interactions between Windows NTFS and Ubuntu Ext4 filesystems. File-based metadata is observed, and timestamps across different cloud access behavioral patterns are compared and validated. As critical metadata information cannot be easily observed, a rigorous iterative approach was implemented to extract hidden, critical file attributes and timestamps. Direct observation and cross-sectional analysis were adopted to analyze timestamps, and to differentiate between patterns based on different types of cloud access operations. Fundamental observation rules and characteristics of file interaction in the cloud environment are derived as behavioral patterns for cloud operations. This study contributes to cloud forensics investigation of data breach incidents where the crime clues, characteristics and evidence of the incidents are collected, identified and analyzed. The results demonstrate the effectiveness of pattern identification for digital forensics across various types of cloud access operations.

      PubDate: 2018-02-25T15:48:53Z
      DOI: 10.1016/j.diin.2017.12.001
      Issue No: Vol. 24 (2018)
       
  • Improving source camera identification performance using DCT based image
           frequency components dependent sensor pattern noise extraction method
    • Authors: Bhupendra Gupta; Mayank Tiwari
      Pages: 121 - 127
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24
      Author(s): Bhupendra Gupta, Mayank Tiwari
      Sensor imperfections in the form of photo response non-uniformity (PRNU) are widely used to perform various image forensic tasks such as source camera identification, image integrity verification, and device linking. The PRNU contains important information about the sensor in terms of frequency content, which makes it suitable for various image forensic applications. The main drawback of existing methods of PRNU extraction is that the extracted PRNU contains fine details of the image, i.e., the high-frequency details (edges and texture). To solve this problem we apply a pre-processing step to widely accepted PRNU extraction methods. Our pre-processing step is based on the fact that the PRNU is a very weak noise signal and hence can be efficiently extracted from the image by applying the PRNU extraction method to the low-frequency (LF) and high-frequency (HF) components of the image separately. Initially, we applied this pre-processing concept to the widely accepted PRNU extraction methods and found that it improves the performance of most of them, with the best improvement obtained for the Mihcak filter. Hence, in the remaining part of the work, this generalized concept is applied more precisely to the Mihcak filter only. With the proposed pre-processing idea, the resulting filter is termed the pMihcak filter. PRNU extracted using the pMihcak filter contains the least amount of HF details of the image. Moreover, the pMihcak filter is able to extract PRNU from the low-frequency components of the image, which is otherwise not possible with existing PRNU extractors.
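
      The generic PRNU pipeline the paper builds on can be sketched with synthetic data: estimate a noise residual per image (image minus a denoised copy), average residuals into a camera fingerprint, and correlate a test residual against it. Here a Gaussian filter stands in for the Mihcak filter, and the proposed frequency-splitting pre-processing is omitted; everything shown is a simplified illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        def residual(img):
            """Noise residual: image minus a denoised copy (stand-in denoiser)."""
            return img - gaussian_filter(img, sigma=1.5)

        def ncc(a, b):
            """Normalised cross-correlation between two residuals."""
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        # Synthetic sensor: a fixed multiplicative PRNU shared by its images.
        prnu = rng.normal(0, 0.02, (64, 64))

        def shoot(scene):
            """Toy camera model: scene * (1 + PRNU) + random shot noise."""
            return scene * (1 + prnu) + rng.normal(0, 1, scene.shape)

        fingerprint = np.mean([residual(shoot(rng.uniform(50, 200, (64, 64))))
                               for _ in range(20)], axis=0)

        same_cam = residual(shoot(rng.uniform(50, 200, (64, 64))))
        other_cam = residual(rng.uniform(50, 200, (64, 64)))
        print(ncc(fingerprint, same_cam))   # noticeably positive
        print(ncc(fingerprint, other_cam))  # near zero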

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.02.003
      Issue No: Vol. 24 (2018)
       
  • Efficiently searching target data traces in storage devices with region
           based random sector sampling approach
    • Authors: Nitesh K. Bharadwaj; Upasna Singh
      Pages: 128 - 141
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24
      Author(s): Nitesh K. Bharadwaj, Upasna Singh
      Today the pervasiveness and low cost of storage disk drives have made digital forensics a cumbersome, slow and expensive task. Since storage drives are huge reservoirs of digital evidence, examination of these devices requires an enormous amount of analysis time and computing resources. In order to efficiently examine large data volumes, a random sector sampling method, a subpart of forensic triage, has been used in the literature to attain admissible investigation outcomes. Conventionally, the random sampling method imposes the primary requirement of extensive seek and read requests. This paper presents a unique framework to efficiently utilize sector hashing and random sampling to investigate the existence of target data traces, by independently exploiting regions of the suspected storage drive. In the literature, there is no specific work on quantifying the number of random samples required to hit desired target data traces in storage drives. Here, a standard percentage of random samples is analyzed and proposed, which may be necessary and sufficient to validate the existence of target data on the drive. Several experiments were devised to evaluate the method, considering storage media and target data of different capacities and sizes. It was observed that the size of the target data is an important factor in determining the percentage of sector samples necessarily required for effectively examining storage disk drives. In view of the quantified percentage of random samples, a case study is finally demonstrated to evaluate the adequacy of the derived metrics.
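
      The core triage loop is simple to express: draw random sector numbers, hash each sampled sector, and test membership in a database of target sector hashes. The self-contained sketch below runs over an in-memory stand-in for a drive; the sizes and the 10% sampling rate are illustrative and are not the metrics derived in the paper.

        import hashlib, random

        SECTOR = 512
        random.seed(1)

        # In-memory stand-in for a suspect drive, with a target file planted.
        drive = bytearray(random.getrandbits(8) for _ in range(4096 * SECTOR))
        target = bytes([0xAB]) * (8 * SECTOR)
        drive[100 * SECTOR:108 * SECTOR] = target

        # Hash database of the target's sectors (as a block-hash tool builds).
        target_hashes = {hashlib.sha1(target[i:i + SECTOR]).digest()
                         for i in range(0, len(target), SECTOR)}

        def sample_sectors(drive, rate=0.10):
            """Randomly sample a fraction of sectors and count target hits."""
            n = len(drive) // SECTOR
            hits = 0
            for s in random.sample(range(n), int(n * rate)):
                sector = bytes(drive[s * SECTOR:(s + 1) * SECTOR])
                if hashlib.sha1(sector).digest() in target_hashes:
                    hits += 1
            return hits

        print(f"target sectors hit: {sample_sectors(drive)}")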

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.02.004
      Issue No: Vol. 24 (2018)
       
  • Source camera identification using Photo Response Non-Uniformity on
           WhatsApp
    • Authors: Christiaan Meij; Zeno Geradts
      Pages: 142 - 154
      Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24
      Author(s): Christiaan Meij, Zeno Geradts
      The Photo Response Non-Uniformity (PRNU) pattern can serve as a method of identification for an individual camera and is often present in digital footage; the PRNU pattern is therefore also called the fingerprint of the camera. This pattern can be extracted and used to identify the source camera with a high likelihood ratio, which can be useful in cases such as child abuse or child pornography. In this research a 2nd-order (FSTV) based method is used to extract the PRNU patterns from videos of ten different mobile phone cameras. By calculating the Peak to Correlation Energy, the PRNU patterns of the natural videos are compared to the PRNU patterns of the reference flat-field videos of each camera to identify the source camera. This has been done for the original videos and for videos transmitted via WhatsApp for Android and iOS, to determine whether source camera identification using PRNU remains possible after transmission through WhatsApp. The PRNU patterns of the natural videos are also compared to each other to determine whether videos originate from the same source. For most cameras tested the method provides a high likelihood ratio; however, if used in casework, a validation of the method with reference cameras of the same model and type is necessary for each case. With videos transmitted by the iOS version of WhatsApp, source camera identification was no longer possible.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.02.005
      Issue No: Vol. 24 (2018)
       
  • Efficient monitoring and forensic analysis via accurate network-attached
           provenance collection with minimal storage overhead
    • Abstract: Publication date: Available online 8 May 2018
      Source:Digital Investigation
      Author(s): Yulai Xie, Dan Feng, Xuelong Liao, Leihua Qin
      Provenance, the history or lineage of an object, has been used to enable efficient forensic analysis in intrusion prevention systems to detect intrusions, correlate anomalies, and reduce false alerts. Especially in a network-attached environment, it is critical and necessary to accurately capture network context to trace back the intrusion source and identify the system vulnerability. However, most of the existing methods fail to collect accurate and complete network-attached provenance. In addition, how to enable efficient forensic analysis with minimal provenance storage overhead remains a big challenge. This paper proposes a provenance-based monitoring and forensic analysis framework called PDMS that builds upon an existing provenance tracking framework. On one hand, it monitors and records every network session, and collects the dependency relationships between files, processes and network sockets. By carefully describing and collecting the network socket information, PDMS can accurately track the data flow into and out of the system. On the other hand, the framework unifies efficient provenance filtering and query-friendly compression. Evaluation results show that this framework enables accurate and highly efficient forensic analysis with minimal provenance storage overhead.

      PubDate: 2018-05-15T13:55:30Z
       
  • TREDE and VMPOP: Cultivating multi-purpose datasets for digital forensics
           – A Windows registry corpus as an example
    • Abstract: Publication date: Available online 28 April 2018
      Source:Digital Investigation
      Author(s): Jungheum Park
      The demand is rising for publicly available datasets to support studying emerging technologies, performing tool testing, detecting incorrect implementations, and ensuring the reliability of security and digital forensics related knowledge. While a variety of data is created on a day-to-day basis in security, forensics and incident response labs, the created data is often not practical to use or has other limitations. In this situation, various researchers, practitioners and research projects have released valuable datasets acquired from computer systems or digital devices used by actual users, or generated during research activities. Nevertheless, there is still a significant lack of reference data for supporting a range of purposes, and there is also a need to increase the number of publicly available testbeds as well as to improve their verifiability as ‘reference’ data. Although existing datasets are useful and valuable, some of them have critical limitations on verifiability if they were acquired or created without ground truth data. This paper introduces a practical methodology for developing synthetic reference datasets in the field of security and digital forensics. This work's proposal divides the steps for generating a synthetic corpus into two different classes: user-generated and system-generated reference data. In addition, this paper presents a novel framework to assist the development of system-generated data along with a virtualization system and elaborate automated virtual machine control, and then proceeds to a proof-of-concept implementation. Finally, this work demonstrates that the proposed concepts are feasible and effective through practical deployment, and then evaluates their potential value.

      PubDate: 2018-05-15T13:55:30Z
       
  • Accrediting digital forensics: What are the choices?
    • Abstract: Publication date: Available online 25 April 2018
      Source:Digital Investigation
      Author(s): Peter Sommer
      There are three apparent competing routes to providing re-assurance about the quality of digital forensics work: accredit the individual expert, accredit the laboratory and its processes, let the courts test via its procedures. The strengths and weaknesses of each are discussed against the variety of activities within “forensic science”. The particular problems of digital forensics, including its complexity and rate of change, are reviewed. It is argued that formal standards may not always be practical or value for money compared with advisory good practice guides.

      PubDate: 2018-05-15T13:55:30Z
       
  • WhatsApp server-side media persistence
    • Abstract: Publication date: Available online 25 April 2018
      Source:Digital Investigation
      Author(s): Angus M. Marshall


      PubDate: 2018-05-15T13:55:30Z
       
  • Forensics study of IMO call and chat app
    • Abstract: Publication date: Available online 25 April 2018
      Source:Digital Investigation
      Author(s): M.A.K. Sudozai, Shahzad Saleem, William J. Buchanan, Nisar Habib, Haleemah Zia
      Smartphones often leave behind a wealth of information that can be used as evidence during an investigation. Many smartphone applications employ encryption to store and/or transmit data, and this can add a layer of complexity for an investigator. IMO is a popular application which employs encryption for both call and chat activities. This paper explores important artifacts from both the device and the network traffic, generated on both the Android and iOS platforms. The novel aspect of the work is the extensive analysis of encrypted network traffic generated by IMO. Along with this, the paper defines a new method of using a firewall to explore the obscured connectivity options, in a way that is independent of the protocol used by the IMO client and server. Our results show that we can correctly detect IMO traffic flows and classify different events of its chat- and call-related activities. We have also compared IMO network traffic on the Android and iOS platforms to report the subtle differences. The results are valid for IMO 9.8.00 on Android and 7.0.55 on iOS.

      PubDate: 2018-05-15T13:55:30Z
       
  • An analytical analysis of Turkish digital forensics
    • Authors: Mesut Ozel; H. Ibrahim Bulbul; H. Guclu Yavuzcan; Omer Faruk Bay
      Abstract: Publication date: Available online 17 April 2018
      Source:Digital Investigation
      Author(s): Mesut Ozel, H. Ibrahim Bulbul, H. Guclu Yavuzcan, Omer Faruk Bay
      The first glimpses of digital forensics (DF) date back to the 1970s, mainly in financial frauds, with the widespread use of computers. The evolution of information technologies and their wider use made digital forensics evolve and flourish. Digital forensics has passed through a short but complex succession of “Ad-Hoc”, “Structured” and “Enterprise” phases in barely four decades. The national readiness of countries may vary across those phases depending on the economy, legislation, adoption level, expertise and other factors. Today the digital forensics discipline is one of the major issues that stakeholders such as law enforcement (LE), government, defense, industry, academia, justice and other non-governmental organizations have to deal with. We wanted to assess the maturity level of “Turkish Digital Forensics” in view of the historical phases of digital forensics, along with some specific institutional and organizational digital forensics issues. An online survey was used to address the current digital forensic capacity and ability; the understanding and adoption level of the discipline; education and training forecasts; the current organizational digital forensics framework and infrastructure; the expertise, certification and knowledge gained or needed by the digital forensics community; the tools and software/hardware used in digital forensics; national legislation, policy making and standardization issues; and the anticipated requirements for the near future. This paper discusses the aforementioned national issues with respect to the digital forensics discipline; it does not examine all aspects of digital forensics. Our general assessment is that the maturity level of national DF lies between the structured and enterprise phases, with a long way to go but with promising developments.

      PubDate: 2018-04-25T08:46:53Z
      DOI: 10.1016/j.diin.2018.04.001
       
  • Automatic categorization of Arabic articles based on their political
           orientation
    • Authors: Raddad Abooraig; Shadi Al-Zu'bi; Tarek Kanan; Bilal Hawashin; Mahmoud Al Ayoub; Ismail Hmeidi
      Abstract: Publication date: Available online 12 April 2018
      Source:Digital Investigation
      Author(s): Raddad Abooraig, Shadi Al-Zu'bi, Tarek Kanan, Bilal Hawashin, Mahmoud Al Ayoub, Ismail Hmeidi
      The ability to automatically determine the political orientation of an article can be of great benefit in many areas from academia to security. However, this problem has been largely understudied for Arabic texts in the literature. The contribution of this work lies in two aspects. First, collecting and manually labeling a corpus of articles and comments from different political orientations in the Arab world and making different versions of it. Second, studying the performance of various feature reduction methods and various classifiers on these synthesized datasets. The two most popular feature extraction approaches for such a problem were compared, namely the Traditional Text Categorization (TC) approach and the Stylometric Features approach (SF). Although the experimental results show the superiority of the TC approach over the SF approach, the results also indicate that the latter approach can be significantly improved by adding new and more discriminating features. The experimental results also show that the feature selection techniques reduce the accuracies of the considered classifiers under the TC and SF approaches in general. The only exception is the Partition Membership (PM) technique which has an opposite effect. The highest accuracies are obtained when PM feature selection method is used with the Support Vector Machine (SVM) classifier.
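
      The traditional text-categorisation (TC) pipeline the paper compares against can be sketched with scikit-learn: TF-IDF features feeding an SVM, the classifier family the abstract reports as giving the highest accuracies. The four-document corpus and its orientation labels below are toy stand-ins, not the paper's Arabic dataset.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy corpus standing in for labelled articles (labels invented).
        docs = ["reform economy market policy",
                "tradition values religion law",
                "market trade economy growth",
                "religion tradition heritage law"]
        labels = ["orientation-A", "orientation-B", "orientation-A", "orientation-B"]

        clf = make_pipeline(TfidfVectorizer(), LinearSVC())
        clf.fit(docs, labels)
        print(clf.predict(["economy policy trade"]))  # -> ['orientation-A']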

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.04.003
       
  • Speaker verification from codec distorted speech for forensic
           investigation through serial combination of classifiers
    • Authors: M.S. Athulya; P.S. Sathidevi
      Abstract: Publication date: Available online 31 March 2018
      Source:Digital Investigation
      Author(s): M.S. Athulya, P.S. Sathidevi
      Forensic investigation often uses biometric evidence as an important aid for identifying culprits. Speech is one of the most easily available biometrics in today's hi-tech world, but most speech biometric evidence acquired for investigative purposes is highly distorted. Among these distortions, the most prominent is that introduced by the speech codec. A speech codec may either remove or distort some of the speaker-specific features, which may reduce speaker verification accuracy. The effect of the distortion introduced by the Code Excited Linear Prediction (CELP) codec (the most widely used speech codec in today's mobile telephony) on commonly used speaker-specific features, namely Mel Frequency Cepstral Coefficients (MFCC) and Power Normalized Cepstral Coefficients (PNCC), is quantified in this paper. The features least affected by the codec were experimentally determined to be the PNCCs. However, when these PNCC coefficients are employed directly, the speaker verification error rate is 20% with a Gaussian Mixture Model-Universal Background Model (GMM-UBM) classifier. To improve the verification accuracy, the PNCCs are slightly modified, and these modified PNCCs (MPNCC) are used as the feature set for speaker verification, reducing the error rate to 15%. By fusing the MPNCCs with MFCC, the error rate is further reduced to 8.75%. A serial combination of GMM-UBM and Support Vector Machine (SVM) classifiers is also proposed here to further enhance speaker verification accuracy. The speaker verification error rates of different baseline classifiers are compared with those of the proposed serially combined GMM-UBM and SVM classifiers. The classifier fusion with the fused feature set greatly reduced the error rate to 2.5%, which is much lower than that of the baseline classifiers with normal PNCC features. Hence, this system is a good candidate for investigative purposes.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.03.005
       
  • Navigating the Windows Mail database
    • Authors: Howard Chivers
      Abstract: Publication date: Available online 21 March 2018
      Source:Digital Investigation
      Author(s): Howard Chivers
      The Windows Mail application in Windows 10 uses an ESE database to store messages, appointments and related data; however, the field (column) names used to identify these records are hexadecimal property tags, many of which are undocumented. To support forensic analysis, a series of experiments was carried out to diagnose the function of these tags, and this work resulted in a body of related information about the Mail application. This paper documents the property tags that have been diagnosed, and presents how Windows Mail artifacts recovered from the ESE store.vol database can be interpreted, including how the paths of files recorded by the Mail system are derived from database records. We also present example emails and appointment records that illustrate forensic issues in the interpretation of message and appointment records, and show how additional information can be obtained by associating these records with other information in the ESE database.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.02.001
       
  • I know what you streamed last night: On the security and privacy of
           streaming
    • Authors: Alexios Nikas; Efthimios Alepis; Constantinos Patsakis
      Abstract: Publication date: Available online 21 March 2018
      Source:Digital Investigation
      Author(s): Alexios Nikas, Efthimios Alepis, Constantinos Patsakis
      Streaming media are currently conquering traditional multimedia through services like Netflix, Amazon Prime and Hulu, which provide millions of users worldwide with paid subscriptions to watch the desired content on demand. Simultaneously, numerous applications and services that infringe this content by sharing it for free have emerged. The latter has given ground to a new market based on illegal downloads which monetizes through ads and custom hardware, often aggregating peers to maximize multimedia content sharing. Regardless of the ethical and legal issues involved, the users of such streaming services number in the millions, and they are severely exposed to various threats, mainly due to poor hardware and software configurations. Recent attacks have also shown that they may, in turn, endanger others as well. This work details these threats and presents new attacks on these systems, as well as forensic evidence that can be collected in specific cases.

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.03.004
       
  • Dismantling OpenPuff PDF steganography
    • Authors: Thomas Sloan; Julio Hernandez-Castro
      Abstract: Publication date: Available online 20 March 2018
      Source:Digital Investigation
      Author(s): Thomas Sloan, Julio Hernandez-Castro
      We present in this paper a steganalytic attack against the PDF component of the popular OpenPuff tool. We show that our findings allow us to accurately detect the presence of OpenPuff steganography in the PDF format with a simple script. OpenPuff is a prominent multi-format, semi-open-source stego-system with a large user base. Because of its popularity, we think our results could have relevant security implications. The relative simplicity of our attack, paired with its high accuracy and the existence of previous steganalytic findings against this software, warrants major concerns over the real security offered by this steganography tool.
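      The paper's actual distinguisher is not reproduced in this listing; purely as an illustration of what a "simple script" check on a PDF can look like, the hypothetical sketch below flags unexpected bytes after the final %%EOF marker, one generic symptom of appended payloads.

      def trailing_bytes_after_eof(pdf_path):
          # Count bytes that follow the final %%EOF marker, ignoring the
          # trailing end-of-line characters the PDF specification allows.
          data = open(pdf_path, "rb").read()
          eof = data.rfind(b"%%EOF")
          if eof == -1:
              return None  # no EOF marker: not a well-formed PDF
          tail = data[eof + len(b"%%EOF"):].strip(b"\r\n")
          return len(tail)

      if __name__ == "__main__":
          import sys
          print("trailing bytes after %%EOF:", trailing_bytes_after_eof(sys.argv[1]))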

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.03.003
       
  • Detecting fake iris in iris bio-metric system
    • Authors: Vijay Kumar Sinha; Anuj Kumar Gupta; Manish Mahajan
      Abstract: Publication date: Available online 20 March 2018
      Source:Digital Investigation
      Author(s): Vijay Kumar Sinha, Anuj Kumar Gupta, Manish Mahajan
      Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of the irises of an individual's eyes, whose complex random patterns are unique and can be seen from some distance. Nowadays, the iris is widely used by several organizations, including governments, for identification and authentication purposes. Aadhar, India's UID project, uses iris scans along with fingerprints to uniquely identify people and allocate a Unique Identification Number. Most work on iris pattern recognition systems emphasizes only matching patterns against stored templates; the security aspects of such systems remain largely unexplored, and the available security algorithms provide only cryptographic solutions that keep the template database in an encrypted form. We enhance the detection of fake iris images and add detection of scanned iris images presented in place of a live iris, which significantly improves the security and reliability of the system. We use flash and motion detection of the natural eye to verify the liveness of an iris before matching against stored templates.
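      A minimal sketch of the flash-based liveness idea, assuming OpenCV and two same-size grayscale frames captured with the flash off and on: a live eye shows pupil constriction and specular change under flash, while a printed or scanned iris changes far less. The threshold is illustrative, not a value from the paper.

      import cv2
      import numpy as np

      def liveness_score(frame_off_path, frame_on_path):
          # Mean absolute pixel difference between flash-off and flash-on
          # frames; a live eye responds to the flash, a print does not.
          off = cv2.imread(frame_off_path, cv2.IMREAD_GRAYSCALE)
          on = cv2.imread(frame_on_path, cv2.IMREAD_GRAYSCALE)
          return float(np.mean(cv2.absdiff(off, on)))

      def is_live(frame_off_path, frame_on_path, threshold=12.0):
          return liveness_score(frame_off_path, frame_on_path) > threshold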

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.03.002
       
  • I didn't see that! An examination of internet browser cache behaviour
           following website visits
    • Authors: Graeme Horsman
      Abstract: Publication date: Available online 2 March 2018
      Source:Digital Investigation
      Author(s): Graeme Horsman
      By default, all major web browsing applications cache visited website content to the local disk to improve browser efficiency and enhance user experience. As a result, the cache provides a window of opportunity for the digital forensic practitioner to establish the nature of the content hosted on websites that had been visited. Cache content is often evidential in cases surrounding Indecent Images of Children (IIoC), where it is often assumed that cached IIoC is a record of the content viewed by a defendant via their browser. However, this may not always be the case. This article investigates web browser cache behaviour in an attempt to identify whether it is possible to definitively establish what quantity of cached content was viewable by a user following a visit to a website. Both the Mozilla Firefox and Google Chrome browser caches are analysed following visits to 10 test websites in order to quantify cache behaviour. Results indicate that the volume of locally cached content differs between web browsers and between websites visited, with instances of cached images that would not have been viewable by the user upon landing on a website. Further, the number of cached images appears to be affected by how much of a website a user scrolls through.
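      The comparison underlying the study can be sketched as follows: collect the image URLs a page references, then check how many appear in the cache. This hypothetical Python example uses only the standard library; cached_urls is assumed to come from a cache-viewer export rather than from parsing the cache format directly.

      from html.parser import HTMLParser

      class ImgCollector(HTMLParser):
          # Gather the src attribute of every <img> tag on the page.
          def __init__(self):
              super().__init__()
              self.srcs = set()

          def handle_starttag(self, tag, attrs):
              if tag == "img":
                  src = dict(attrs).get("src")
                  if src:
                      self.srcs.add(src)

      def cache_coverage(page_html, cached_urls):
          # Return (images found in cache, images referenced by the page).
          collector = ImgCollector()
          collector.feed(page_html)
          referenced = collector.srcs
          return len(referenced & set(cached_urls)), len(referenced)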

      PubDate: 2018-04-15T20:25:24Z
      DOI: 10.1016/j.diin.2018.02.006
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24


      PubDate: 2018-04-15T20:25:24Z
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement


      PubDate: 2018-04-15T20:25:24Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24


      PubDate: 2018-04-15T20:25:24Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement


      PubDate: 2018-04-15T20:25:24Z
       
  • The Proceedings of the Fifth Annual DFRWS Europe Conference
    • Abstract: Publication date: March 2018
      Source:Digital Investigation, Volume 24, Supplement


      PubDate: 2018-04-15T20:25:24Z
       
  • Editorial - A smörgåsbord of digital evidence
    • Authors: Eoghan Casey
      Pages: 1 - 2
      Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23
      Author(s): Eoghan Casey


      PubDate: 2017-12-26T18:26:02Z
      DOI: 10.1016/j.diin.2017.11.003
      Issue No: Vol. 23 (2017)
       
  • Prelim i - Editorial Board
    • Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23


      PubDate: 2017-12-26T18:26:02Z
       
  • Prelim iii - Contents List
    • Abstract: Publication date: December 2017
      Source:Digital Investigation, Volume 23


      PubDate: 2017-12-26T18:26:02Z
       
  • Investigation of Indecent Images of Children cases: Challenges and
           suggestions collected from the trenches
    • Authors: Virginia N.L. Franqueira; Joanne Bryce; Noora Al Mutawa; Andrew Marrington
      Abstract: Publication date: Available online 2 December 2017
      Source:Digital Investigation
      Author(s): Virginia N.L. Franqueira, Joanne Bryce, Noora Al Mutawa, Andrew Marrington
      Previous studies examining the investigative challenges and needs of Digital Forensic (DF) practitioners have typically taken a sector-wide focus. This paper presents the results of a survey which collected text-rich comments about the challenges experienced and related suggestions for improvement in the investigation of Indecent Images of Children (IIOC) cases. The comments were provided by 153 international DF practitioners (28.1% survey response rate) and were processed using Thematic Analysis. This resulted in the identification of 4 IIOC-specific challenge themes, and 6 DF-generic challenges which directly affect IIOC. The paper discusses these identified challenges from a practitioner perspective, and outlines their suggestions for addressing them.

      PubDate: 2017-12-12T22:56:33Z
       
  • A method and tool to recover data deleted from a MongoDB
    • Authors: Jongseong Yoon; Sangjin Lee
      Abstract: Publication date: Available online 21 November 2017
      Source:Digital Investigation
      Author(s): Jongseong Yoon, Sangjin Lee
      A DBMS stores important data and is a key subject of analysis in digital forensics. Techniques for recovering deleted data from a DBMS play an important role in finding evidence in forensic investigation cases. Although relational DBMSs remain important data stores, NoSQL DBMSs are increasingly used with the growing pursuit of Big Data, which raises the potential need to analyze a NoSQL DBMS in forensic cases. Indeed, data from approximately 26,000 servers has been deleted by a massive ransom attack on vulnerable MongoDB servers. Analysis of the internal structures of NoSQL DBMSs and techniques for recovering their deleted data are therefore essential. In this paper, we study methods of recovering deleted data from MongoDB, a widely used NoSQL DBMS. We have analyzed the internal structures of the WiredTiger and MMAPv1 storage engines, MongoDB's disk-based storage engines. Moreover, we have implemented the recovery algorithm as a tool and evaluated its performance on real and self-generated experimental data.
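      One recovery idea can be illustrated with a simplified carving sketch: BSON documents begin with a little-endian int32 total length and end with a 0x00 byte, so candidate documents can be carved from raw storage by scanning for that layout. This is not the authors' tool, and real WiredTiger pages are often compressed, so a scan like this only recovers documents from uncompressed regions; the file name below is hypothetical.

      import struct

      MAX_DOC = 16 * 1024 * 1024  # MongoDB's maximum BSON document size

      def carve_bson(raw):
          # Scan the buffer for spans that look like BSON documents: a
          # plausible int32 length prefix and a terminating 0x00 byte.
          offset = 0
          while offset + 5 <= len(raw):
              (length,) = struct.unpack_from("<i", raw, offset)
              end = offset + length
              if 5 <= length <= MAX_DOC and end <= len(raw) and raw[end - 1] == 0:
                  yield offset, raw[offset:end]  # candidate document
                  offset = end
              else:
                  offset += 1

      with open("collection-0.wt", "rb") as f:  # hypothetical file name
          for off, doc in carve_bson(f.read()):
              print("candidate BSON document at offset", off, "-", len(doc), "bytes")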

      PubDate: 2017-12-12T22:56:33Z
       
 
 