Digital Investigation
Journal Prestige (SJR): 0.635
Citation Impact (citeScore): 3
  Full-text available via subscription
ISSN (Print) 1742-2876
Published by Elsevier
  • The darknet's smaller than we thought: The life cycle of Tor Hidden
    Services
    • Abstract: Publication date: Available online 22 September 2018. Source: Digital Investigation. Author(s): Gareth Owenson, Sarah Cortes, Andrew Lewman. Abstract: The Tor Darknet is a pseudo-anonymous place to host content online, frequently used by criminals to sell narcotics and distribute illicit material. Many studies have attempted to estimate the size of the darknet, but this paper shows that previous size estimates are inaccurate because they ignore the hidden-service lifecycle. The first examination of its kind is presented on the differences between short-lived and long-lived hidden services. Finally, in light of a new Tor protocol that will prevent relay operators from learning darknet site addresses, an analysis is presented of crawling and whether it is an effective mechanism for law enforcement to discover sites.
  • Requirements in digital forensics method definition: Observations from a
           UK study
    • Abstract: Publication date: Available online 20 September 2018. Source: Digital Investigation. Author(s): Angus M. Marshall, Richard Paige. Abstract: During a project to examine the potential usefulness of evidence of tool verification as part of method validation for ISO 17025 accreditation, the authors examined requirements statements in several digital forensic method descriptions and tools. They identified an absence of clear requirements statements in the methods, and a reluctance or inability on the part of tool producers to disclose requirements. This leads to a break in the evidence of correctness for both tools and methods, resulting in incomplete validation. They compare the digital forensics situation with other ISO 17025 accredited organisations, both forensic and non-forensic, and propose a means to close the gap and improve validation. They also review existing projects which may assist with their proposed solution.
  • The need for translational research in digital investigation
    • Abstract: Publication date: September 2018. Source: Digital Investigation, Volume 26. Author(s): Eoghan Casey.
  • Prelim iii - Contents List
    • Abstract: Publication date: September 2018. Source: Digital Investigation, Volume 26.
  • Prelim i - Editorial Board
    • Abstract: Publication date: September 2018. Source: Digital Investigation, Volume 26.
  • A Jungle Computing approach to common image source identification in large
           collections of images
    • Abstract: Publication date: Available online 18 September 2018. Source: Digital Investigation. Author(s): B. van Werkhoven, P. Hijma, C.J.H. Jacobs, J. Maassen, Z.J.M.H. Geradts, H.E. Bal. Abstract: Analyzing digital images is an important task in forensics, given the ever-increasing number of images from computers and smartphones. In this article we aim to advance the state of the art in common image source identification (determining which images originate from the same source camera). To this end, we present two applications with different goals that make use of a) a modern desktop computer with a GPU and b) highly heterogeneous cluster computers with many different kinds of GPUs, something we call computing jungles. The first application targets medium-scale investigations, for example within a crime laboratory; the second is targeted at large-scale investigations, for example within institutions. We advance the state of the art by 1) explaining in detail how we obtain the performance to 2) support large databases of images in reasonable time while 3) not giving up accuracy. Moreover, we do not apply filtering, ensuring that 4) our results are highly reproducible.
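The common-source question above hinges on sensor pattern noise: two images taken by the same camera share a correlated noise residual. A minimal sketch of that idea, assuming a simple 3x3 mean filter as the denoiser and normalized cross-correlation as the similarity score (the paper's actual filters, detectors and thresholds are not given in the abstract):

```python
import numpy as np

def box_blur(img):
    # 3x3 mean filter: a lightweight stand-in for the denoising filter
    # used in sensor-noise (PRNU-style) pipelines.
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def noise_residual(img):
    # Residual = image minus its denoised version; the camera's sensor
    # pattern noise survives in this high-frequency residual.
    img = img.astype(np.float64)
    return img - box_blur(img)

def ncc(a, b):
    # Normalized cross-correlation between two flattened residuals.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_source(img1, img2, threshold=0.1):
    # Images whose residuals correlate strongly likely share a camera.
    return ncc(noise_residual(img1), noise_residual(img2)) > threshold
```

Real pipelines use wavelet denoising and peak-to-correlation-energy statistics; this sketch only illustrates why residual correlation separates same-camera from different-camera pairs.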
  • Forensic analysis of B-tree file system (Btrfs)
    • Abstract: Publication date: Available online 14 September 2018. Source: Digital Investigation. Author(s): Wasim Ahmad Bhat, Mohamad Ahtisham Wani. Abstract: This paper identifies forensically important artifacts of the B-tree file system (Btrfs), analyses the changes they incur due to node balancing during file and directory operations, and, based on the observed file system state changes, proposes an evidence-extraction procedure. The findings suggest that retrieving forensic evidence from a fresh B-tree file system is difficult; that the probability of evidence extraction increases as the file system ages; that internal nodes are the richest sources of forensic data; that the degree of evidence extraction depends upon whether nodes are merged or redistributed; and that files smaller than 1 KB or larger than 4 KB have the highest chances of recovery, while files of 3–4 KB have the least.
  • Median filter detection through streak area analysis
    • Abstract: Publication date: Available online 21 August 2018. Source: Digital Investigation. Author(s): Sajjad Ahmed, Saiful Islam. Abstract: The median filter (MF) is a content-preserving nonlinear filter employed to hide traces of image manipulation, affecting the reliability of manipulation detection techniques. Thus, median filter detection is a major concern for digital image forensics (DIF) experts. Existing methods for median filter detection (MFD) are computationally expensive because high-dimensional feature vectors are employed. This paper proposes an effective method for blind median filter detection based on the streaking effect of the median filter. The method is built on the experimental observation that the percentage streak area (psa) of an image increases on repetitive median filtering, and that psa increases at a different rate for median-filtered images than for unfiltered images. A feature vector based on this observation is extracted from three image datasets (UCID, BOSS and Dresden) and fed to a Support Vector Machine (SVM) to perform 10-fold cross-validation using a linear kernel. The results obtained using a three-dimensional feature vector demonstrate the efficacy of the proposed method.
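The streaking observation above can be sketched directly: repeated median filtering makes neighbouring pixels equal, so an equality-based streak measure grows with each filtering pass. A hedged illustration, where the psa approximation and the feature construction are assumptions rather than the paper's exact definitions:

```python
import numpy as np

def medfilt3(img):
    # 3x3 median filter built from shifted views (no SciPy needed).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def psa(img):
    # Percentage streak area, approximated here as the fraction of
    # horizontally adjacent pixel pairs with equal values.
    return float((img[:, 1:] == img[:, :-1]).mean())

def streak_features(img, rounds=2):
    # Track how psa grows under repeated median filtering; the growth
    # rate is what separates median-filtered from unfiltered images.
    feats, cur = [], img
    for _ in range(rounds + 1):
        feats.append(psa(cur))
        cur = medfilt3(cur)
    return feats  # a small feature vector, e.g. input for an SVM
```

In the paper this kind of low-dimensional vector feeds a linear-kernel SVM; the sketch stops at feature extraction.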
  • Identification of the Source Camera of Images Based on Convolutional
           Neural Network
    • Abstract: Publication date: Available online 7 August 2018. Source: Digital Investigation. Author(s): Na Huang, Jingsha He, Nafei Zhu, Xinggang Xuan, Gongzheng Liu, Chengyue Chang. Abstract: Identification of the source of images is an indispensable part of digital forensics involving images. Although the correlation between images and imaging devices can be characterized using certain features, making it possible to identify the sources of images, the efficiency of feature extraction, estimation and matching in current methods is less than satisfactory and could be improved by automatic feature extraction. This paper proposes a method for identifying the source camera of digital images based on a convolutional neural network. The method relies on a new network design comprising an input layer, three convolutional layers with max pooling and normalization, two fully connected layers and a Softmax classifier, tested on the task of identifying the source camera following the procedure of digital forensics. The original images are cropped into small patches that the designed network analyzes, lowering the network's requirement for a large quantity of sample images from the target camera as training data. A local-to-global strategy is also adopted in which majority voting over the image patches determines the source camera. Testing results show that the proposed method can achieve accuracy of up to 99.8%, which confirms the effectiveness of the majority voting. In addition, an SVM classifier trained on the deep convolutional features extracted from the network can achieve testing performance even better than Softmax.
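The local-to-global strategy above, patch-level classification followed by majority voting, can be sketched independently of any particular CNN. Here `classify_patch` is a hypothetical stand-in for the paper's trained network:

```python
import numpy as np
from collections import Counter

def crop_patches(img, size):
    # Non-overlapping crops, mirroring the strategy of classifying
    # small patches rather than whole images.
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def identify_camera(img, classify_patch, size=64):
    # classify_patch: any callable returning a camera label per patch
    # (the paper uses a CNN with a Softmax head; stubbed out here).
    votes = [classify_patch(p) for p in crop_patches(img, size)]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)  # winning label and its vote share
```

The vote share doubles as a rough confidence: a near-unanimous vote is stronger evidence than a slim majority.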
  • Eighteenth Annual DFRWS Conference
    • Abstract: Publication date: July 2018Source: Digital Investigation, Volume 26, SupplementAuthor(s):
  • Prelim iii - Contents List
    • Abstract: Publication date: July 2018Source: Digital Investigation, Volume 26, SupplementAuthor(s):
  • Prelim i - Editorial Board
    • Abstract: Publication date: July 2018Source: Digital Investigation, Volume 26, SupplementAuthor(s):
  • Leveraging relocations in ELF-binaries for Linux kernel version
    identification
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Manish Bhatt, Irfan Ahmed. Abstract: Identification of the operating system kernel version is essential in a large number of forensic and security applications in both cloud and local environments. Prior state-of-the-art approaches use complex differential analysis of several aspects of kernel implementation and knowledge of kernel data structures. In this paper, we present a working research prototype, codeid-elf, for ELF binaries, based on its Windows counterpart codeid, which can identify kernels through relocation entries extracted from the binaries. We show that relocation-based signatures are unique and distinct and thus can be used to accurately determine Linux kernel versions and derandomize the base address of the kernel in memory (when kernel Address Space Layout Randomization is enabled). We evaluate the effectiveness of codeid-elf on a subset of Linux kernels and find that the relocations in kernel code have nearly 100% code coverage and low similarity (high uniqueness) across kernels. Finally, we show that codeid-elf can detect all kernel versions in the test set with almost 100% page hit rate and nearly zero false negatives.
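The matching step implied above, comparing an observed set of relocation entries against per-version signatures, can be illustrated with a set-similarity sketch. Jaccard similarity and the 0.5 cutoff are illustrative assumptions, not codeid-elf's documented metric:

```python
def jaccard(a, b):
    # Set similarity between two relocation-offset signatures.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def identify_kernel(observed, signatures, min_score=0.5):
    # signatures maps a kernel version to the set of relocation offsets
    # extracted from its ELF binary; the best-scoring version wins.
    best = max(signatures, key=lambda v: jaccard(observed, signatures[v]))
    score = jaccard(observed, signatures[best])
    return (best, score) if score >= min_score else (None, score)
```

Because the paper reports low cross-kernel similarity, even a coarse metric like this separates versions cleanly; the minimum-score threshold guards against matching a kernel absent from the signature database.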
  • On the viability of data collection using Google Rapid Response for
           enterprise-level malware research
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Rayan Mosli, Bo Yuan, Yin Pan. Abstract: With the increasing number of attacks on enterprises, which often involve the deployment of some form of malware, an automated method for performing large-scale triage has become essential to the timely resolution of an incident. The purpose of this project is to combine the data collection capabilities of Google Rapid Response (GRR) with the flexible automation of Cuckoo Sandbox to collect data for training machine learning models that perform triage on enterprise machines. To evaluate the viability of this approach, we investigate the artifacts that can be collected using GRR and whether they provide salient features for triage automation. Furthermore, we consider the speed of data collection and the consistency of the collected data when scaling the analysis environment to include more machines. Moreover, we develop multiple simulations of benign computer usage, both for generating the benign dataset and as real-world background activity when injecting malware samples; examples include surfing the web, using a word editor, and Python coding in an IDE. We investigated a total of 39 Windows artifacts that can be remotely collected using GRR's StartFlowAndWait API, which blocks execution until the artifacts are collected or an error message is received. Collecting all 39 artifacts required over an hour on a dedicated network connection between the analysis VM and the GRR server; however, handpicking only 11 artifacts reduces the average data collection time to 4 minutes. We also found that increasing the number of analysis machines caused fewer artifacts to be successfully collected. This drop in reliability is due to network congestion and the waste of other computing resources by the blocking mechanism of StartFlowAndWait. Although GRR is designed for large-scale deployment, we found that the default configuration of GRR is not sufficient for malware research data collection when using StartFlowAndWait instead of StartFlow.
  • Facial-Forensic Analysis Tool
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Hiba Al-Kawaz, Nathan Clarke, Steven Furnell, Fudong Li. Abstract: Facial recognition has played an essential role in digital forensics due to the widespread use of digital technology such as CCTV, mobile phones, and digital cameras. The growing volume of multimedia files (photos and videos) is therefore a valuable source of evidence, and the ability to identify culprits is invaluable. Despite significant efforts in this area, facial recognition suffers from several drawbacks, caused by photo-condition issues such as bad illumination, facial orientation, facial expression, photo quality, accessories (e.g., hats, glasses), and aging. The Facial-Forensic Analysis Tool (F-FAT) provides a technique that aids forensic investigation through automatic facial recognition. It is a holistic system developed to collect, examine, and analyse multimedia evidence (photos and videos) using a multi-algorithmic fusion approach that overcomes the weaknesses of individual algorithms and achieves better identification accuracy. The proposed approach also helps to reduce the cognitive load placed upon the investigator by providing a variety of forensic analyses, such as geo-location, facial modification, and social networks, to enable quicker answers to queries. The tool has also been designed around a case management concept that helps to manage the overall system and provides robust authentication, authorization and chain of custody.
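The multi-algorithmic fusion idea above can be sketched as score-level fusion: normalise each algorithm's match scores, then combine them with a weighted sum. This is one common fusion rule; the abstract does not specify F-FAT's exact scheme, so treat the details below as assumptions:

```python
def min_max(scores):
    # Min-max normalisation so scores from different matchers are
    # comparable on a common [0, 1] scale.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(match_scores, weights=None):
    # match_scores: {algorithm: [match score per candidate identity]}.
    # Weighted-sum fusion after normalisation; equal weights by default.
    algos = list(match_scores)
    weights = weights or {a: 1.0 / len(algos) for a in algos}
    n = len(next(iter(match_scores.values())))
    fused = [0.0] * n
    for a in algos:
        for i, s in enumerate(min_max(match_scores[a])):
            fused[i] += weights[a] * s
    return fused
```

The point of fusion is that a candidate ranked highly by several weak matchers can beat one ranked highly by a single matcher, which is how combining algorithms compensates for individual weaknesses.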
  • Experience constructing the Artifact Genome Project (AGP): Managing the
           domain's knowledge one artifact at a time
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Cinthya Grajeda, Laura Sanchez, Ibrahim Baggili, Devon Clark, Frank Breitinger. Abstract: While various tools have been created to assist the digital forensics community with acquiring, processing, and organizing evidence and indicating the existence of artifacts, very few attempts have been made to establish a centralized system for archiving artifacts. The Artifact Genome Project (AGP) has aimed to create the largest vetted and freely available digital forensics repository for Curated Forensic Artifacts (CuFAs). This paper details the experience of building, implementing, and maintaining such a system by sharing design decisions, lessons learned, and future work. We also discuss the impact of AGP in both the professional and academic realms of digital forensics. Our work shows promise in the digital forensics academic community to champion the effort in curating digital forensic artifacts by integrating AGP into courses, research endeavors, and collaborative projects.
  • Memory forensics and the Windows Subsystem for Linux
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Nathan Lewis, Andrew Case, Aisha Ali-Gombe, Golden G. Richard. Abstract: The Windows Subsystem for Linux (WSL) was first included in the Anniversary Update of Microsoft's Windows 10 operating system and supports execution of native Linux applications within the host operating system. This integrated support of Linux executables in a Windows environment presents challenges to existing memory forensics frameworks, such as Volatility, that are designed to support only one operating system type per analysis task (e.g., execution of a single framework plugin). WSL breaks this analysis model, as Linux forensic artifacts, such as ELF executables, are active in a sample of physical memory from a system running Windows. Furthermore, WSL integrates Linux-specific data structures into existing Windows data structures, such as those used to track per-process metadata as well as userland runtime data. This integration results in existing analysis plugins producing inconsistent results when analyzing native Windows processes compared to WSL processes. Further complicating this situation is the fact that much of the WSL subsystem's internals are completely undocumented. To remedy the current deficiencies related to WSL analysis, a research effort was undertaken to understand which existing Volatility plugins are affected by the introduction of WSL and what updates are necessary to fully support memory forensics of WSL. This paper describes these efforts, including our study of the operating system data structures relevant to WSL and the development of new Volatility analysis plugins.
  • Digital forensic investigation of two-way radio communication equipment
           and services
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Arie Kouwen, Mark Scanlon, Kim-Kwang Raymond Choo, Nhien-An Le-Khac. Abstract: Historically, radio equipment has been used solely as a two-way analogue communication device. Today, the use of radio communication equipment is increasing among numerous organisations and businesses. The functionality of these traditionally short-range devices has expanded to include private call, address book, call logs, text messages, lone worker, telemetry, data communication, and GPS. Many of these devices also integrate with smartphones, delivering Push-To-Talk services that make it possible to set up connections between users of two-way radios and smartphones; in fact, these services can also connect users using only smartphones. To date, there is little research on the digital traces in modern radio communication equipment, yet increasing the knowledge base about these devices and services can be valuable to law enforcement in a police investigation. In this paper, we investigate what kinds of radio communication equipment and services law enforcement digital investigators may encounter at a crime scene or in an investigation. Following seizure of such equipment, we explore the traces of forensic interest and how they can be acquired. Finally, we test our approach on sample radio communication equipment and services.
  • Resurrecting Portable Network Graphics using block generators
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Martin Lambertz, Jan-Niclas Hilgert, Roman Schell.
  • Reconstructing ADS data hiding in windows NTFS: A temporal analysis
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Da-Yu Kao, Yuan-Pei Chen, Neng-Hsin Shih. Abstract: The Windows NTFS file system supports alternate data streams (ADS) to provide compatibility with files in the Macintosh file system. ADS can be used as a hidden channel for storing and exchanging information on machines without altering their original functionality or contents. Executables in ADS can be executed from the command line, and it is common for attackers to hide malware in cover media (files or folders) by ADS creation, modification or overwriting. Temporal information is significant when the computer is on: the $SI and $FN attributes in the Master File Table (MFT) hold four forensically interesting EMAC timestamps. Timestamp dynamics refers to any influence that adds, changes, obscures, contaminates, or obliterates timestamps, regardless of intent. Getting precise information about the file metadata in the MFT is important to assessing the scenario of the offense. The study of file metadata and ADS manipulation assists in establishing timestamp patterns and correlating activities from timestamp evidence. Experiments were conducted to identify EMAC timestamps in $SI and $FN, collect observations in the MFT, examine hidden channels, analyze the timeline scenario, and present artifacts and non-artifacts to reconstruct the incident. This study explores the temporal analysis facing the law enforcement community and discusses the application of Forensic Toolkit (FTK) software to cope with the increasing use of the ADS feature in digital forensic investigations. It also establishes timestamp rules for ADS manipulation, enhances the performance of investigations, and helps investigators reconstruct an incident and evaluate whether an attacker has manipulated ADS to conceal an offense.
  • CGC monitor: A vetting system for the DARPA cyber grand challenge
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Michael F. Thompson, Timothy Vidas. Abstract: The DARPA Cyber Grand Challenge (CGC) pitted autonomous machines against one another in a battle to discover, mitigate, and take advantage of software vulnerabilities. The competitors repeatedly formulated and submitted binary software for execution against opponents and to mitigate attacks mounted by opponents. The US Government sought confidence that competitors legitimately won their rewards (a prize pool of up to $6.75 million USD), and competitors deserved evidence that all parties operated in accordance with the rules, which prohibited attempts to subvert the competition infrastructure. To support those goals, we developed an analysis system to vet competitor software submissions destined for execution on the competition infrastructure: the classic situation of running untrusted software. In this work, we describe the design and implementation of this vetting system, as well as results gathered from its deployment during the CGC competition. The analysis system is implemented upon a high-fidelity full-system simulator requiring no modifications to the monitored operating system. We used this system to vet software submitted during the CGC Qualifying Event and the CGC Final Event. The overwhelming majority of the vetting occurred in an automated fashion, with the system automatically monitoring the full x86-based system to detect corruption of operating system execution paths and data structures. The vetting system also facilitates investigation of any execution deemed suspicious by the automated process (or indeed any analysis required to answer queries related to the competition): an analyst may replay any software interaction using an IDA Pro plug-in, which utilizes the IDA debugger client to execute the session in reverse. In post-mortem analysis, we found no evidence of attempted infrastructure subversion, and further conclude that of the 20 vulnerable software services exploited in the CGC Final Event, half were exploited in ways unintended by the service authors: six services were exploited due to vulnerabilities accidentally included by the authors, while an additional four were exploited via the author-intended vulnerability but through an unanticipated path.
  • Welcome pwn: Almond smart home hub forensics
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Akshay Awasthi, Huw O.L. Read, Konstantinos Xynos, Iain Sutherland. Abstract: Many home interactive sensors and networked devices are being branded as "Internet of Things" or IoT devices. Such disparate gadgets often have little in common other than that they all communicate using similar protocols. The emergence of devices known as "smart home hubs" allows such hardware to be controlled by non-technical users, providing inexpensive home security and other home automation functions. To the cyber analyst, these smart environments can be a boon to digital forensics; information such as interactions with the devices and sensors registering motion, temperature or moisture levels in different rooms all tends to be collected in one central location rather than in separate ones. This paper presents research conducted on one such smart home hub environment, the Securifi Almond+, and provides guidance for forensic data acquisition and analysis of artefacts pertaining to user interaction across the hub, the iPhone/Android companion applications and the local and cloud-based web interfaces.
  • Who watches the watcher? Detecting hypervisor introspection from
           unprivileged guests
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Tomasz Tuzel, Mark Bridgman, Joshua Zepf, Tamas K. Lengyel, K.J. Temkin. Abstract: We present research on the limitations of detecting atypical hypervisor activity from the perspective of a guest domain. Individual instructions with virtual machine exiting capability were evaluated, using wall timing and kernel thread racing as metrics. Cache-based memory access timing is performed with the Flush+Reload technique. Analysis of potential methods for detecting non-temporal memory accesses is also discussed. We find that a guest domain can use these techniques to reliably determine whether instructions or memory regions are being accessed in a manner that deviates from normal hypervisor behavior.
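The wall-timing metric mentioned above reduces to a simple idea: an instruction trapped by the hypervisor (a VM exit) takes far longer than its baseline, so outlier timings betray introspection. A toy sketch with synthetic nanosecond samples; real detection must contend with noise, TSC virtualisation and cache effects:

```python
import statistics

def timing_baseline(samples):
    # Median and spread of instruction wall times measured on a
    # machine believed to be un-introspected.
    return statistics.median(samples), statistics.pstdev(samples)

def detect_introspection(samples, baseline, k=5.0):
    # Flag timings far above the baseline: a hypervisor trapping the
    # instruction adds a large, visible latency spike.
    med, sd = baseline
    cutoff = med + k * max(sd, 1.0)
    return [t for t in samples if t > cutoff]
```

The `max(sd, 1.0)` floor is an assumption to keep the cutoff sane when the baseline is nearly noise-free; the multiplier k would be tuned empirically.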
  • DroidKex: Fast extraction of ephemeral TLS keys from the memory of Android
    applications
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Benjamin Taubmann, Omar Alabduljaleel, Hans P. Reiser. Abstract: Fast extraction of ephemeral data from the memory of a running process without affecting the performance of the analyzed program is a problem when the location and data structure layout of the information are not known. In this paper, we introduce DroidKex, an approach for partially reconstructing the semantics of data structures in order to minimize the overhead required for extracting information from the memory of applications. We demonstrate the practicability of our approach by applying it to 86 Android applications in order to extract the cryptographic key material of TLS connections.
  • Automated forensic analysis of mobile applications on Android devices
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Xiaodong Lin, Ting Chen, Tong Zhu, Kun Yang, Fengguo Wei. Abstract: It is not uncommon for mobile phones to be involved in criminal activities, e.g., the surreptitious collection of credit card information. Forensic analysis of mobile applications plays a crucial part in gathering evidence against criminals. However, traditional forensic approaches, which are based on manual investigation, do not scale to the large number of mobile applications, while dynamic analysis is hard to automate due to the burden of setting up the proper runtime environment to accommodate OS differences and dependent libraries, and of activating all feasible program paths. We propose a fully automated tool, Fordroid, for the forensic analysis of mobile applications on Android. Fordroid conducts inter-component static analysis on Android APKs and builds control flow and data dependency graphs. Furthermore, Fordroid identifies what information is written to local storage, and where, using taint analysis; data is located by traversing the graphs. This addresses several technical challenges, including inter-component string propagation, string operations (e.g., append) and API invocations. Fordroid also identifies how the information is stored by parsing SQL commands, i.e., the structure of database tables. Finally, we selected 100 random Android applications, comprising 2841 components, from four categories for evaluation. Analysis of all apps took 64 hours. Fordroid discovered 469 paths in 36 applications that wrote sensitive information (e.g., GPS) to local storage. Furthermore, Fordroid successfully located where the information was written for 458 (98%) of the paths and identified the structure of all 22 database tables.
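The database-structure recovery step above, identifying how information is stored by parsing SQL commands, can be sketched with a simplified CREATE TABLE parser. The grammar handled here is deliberately minimal and is not Fordroid's actual implementation:

```python
import re

def parse_create_table(sql):
    # Recover the table name and column definitions from a CREATE TABLE
    # statement, the way a static analyzer might reconstruct the layout
    # of an app's database. Returns None if the statement doesn't match.
    m = re.match(
        r"\s*CREATE\s+TABLE\s+(?:IF\s+NOT\s+EXISTS\s+)?(\w+)\s*\((.*)\)\s*;?\s*$",
        sql, re.IGNORECASE | re.DOTALL)
    if not m:
        return None
    name, body = m.group(1), m.group(2)
    columns = []
    for part in body.split(","):
        tokens = part.strip().split()
        # Skip table-level constraints (PRIMARY KEY (...), FOREIGN KEY ...).
        if tokens and tokens[0].upper() not in ("PRIMARY", "FOREIGN", "UNIQUE", "CHECK"):
            columns.append((tokens[0], tokens[1].upper() if len(tokens) > 1 else None))
    return name, columns
```

A real parser would need to handle quoted identifiers, nested parentheses and type arguments, which this regex-based sketch deliberately ignores.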
  • Deep learning at the shallow end: Malware classification for non-domain
    experts
    • Abstract: Publication date: Available online 15 July 2018. Source: Digital Investigation. Author(s): Quan Le, Oisín Boydell, Brian Mac Namee, Mark Scanlon. Abstract: Current malware detection and classification approaches generally rely on time-consuming and knowledge-intensive processes to extract patterns (signatures) and behaviors from malware, which are then used for identification. Moreover, these signatures are often limited to local, contiguous sequences within the data, ignoring their context in relation to each other and throughout the malware file as a whole. We present a Deep Learning based malware classification approach that requires no expert domain knowledge and is based on a purely data-driven approach to complex pattern and feature identification.
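A purely data-driven classifier like the one described needs malware binaries mapped to fixed-size inputs first. One common representation, resampling raw bytes into a fixed-length intensity vector, can be sketched as follows; the vector length and averaging scheme here are illustrative choices, not the paper's:

```python
import numpy as np

def bytes_to_feature(data, length=16):
    # Resample a raw binary into a fixed-length vector of mean byte
    # intensities, so variable-sized files become uniform inputs for a
    # neural network.
    arr = np.frombuffer(data, dtype=np.uint8).astype(np.float64)
    if arr.size == 0:
        return np.zeros(length)
    # Split the byte array into `length` roughly equal segments.
    idx = np.linspace(0, arr.size, length + 1).astype(int)
    out = np.empty(length)
    for k, (a, b) in enumerate(zip(idx[:-1], idx[1:])):
        out[k] = arr[a:b].mean() if b > a else arr[min(a, arr.size - 1)]
    return out / 255.0  # normalise to [0, 1]
```

Local averaging preserves coarse structure (headers, code sections, packed regions) while discarding the file-length variation that a fixed-input network cannot accept.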
  • The Bylock fallacy: An In-depth Analysis of the Bylock Investigations in
    Turkey
    • Abstract: Publication date: Available online 15 June 2018. Source: Digital Investigation. Author(s): Yasir Gokce. Abstract: Bylock is a secure communication app whose availability Turkish authorities believe was exclusively allocated to members of the Gülen Movement, a social and religious group regarded by the regime in Turkey as a terrorist organization. The allegation of having downloaded the Bylock app is currently a sufficient finding for the Turkish judiciary to arrest tens of thousands of followers, as well as other Turkish citizens with no link whatsoever to the Movement, on the basis of their alleged ties to a so-called terrorist organization. Examining the legality of the process of retrieving the Bylock metadata, as well as the way the data were linked to individual Bylock users, this article aims to inform readers, via a recent case, about the extent to which digital forensic principles are overlooked in Turkey. The procedure for legally obtaining data from an electronic device and for intercepting a private communication under the Turkish Criminal Procedure Code is defined, and it is explained how the process of retrieving the Bylock data infringed that procedure. The article also delves into what the Turkish data retention law envisages in relation to the Bylock case and why the use of Bylock data in judicial proceedings contravenes the law. In a nutshell, this paper exposes the great extent to which the Turkish authorities manipulate digital data so as to incriminate critics profiled beforehand. All in all, the Turkish administrative and judicial authorities involved in the acquisition of the Bylock metadata, the preparation of the Bylock user lists, and the apprehension, detention and conviction of individuals based on those lists clearly infringe Turkish legislation and commit serious crimes under the Turkish Penal Code.
  • How to decrypt PIN-based encrypted backup data of Samsung smartphones
    • Abstract: Publication date: Available online 2 June 2018. Source: Digital Investigation. Author(s): Myungseo Park, Hangi Kim, Jongsung Kim. Abstract: Smartphones, a necessity for modern people, have become important to forensic investigators, as they hold a great deal of user information that can constitute potential evidence. To obtain such evidence, forensic investigators must first extract the data from the smartphone. However, if the smartphone is lost or broken, it can be difficult to collect the data from the phone itself. In this case, backup data can be very useful, because it stores almost all the information the smartphone holds. Nevertheless, since backup data is typically encrypted by vendor-provided applications, the encrypted backup data, which acts as an anti-forensic measure, is difficult to use. It is therefore crucial to decrypt acquired encrypted backup data in order to use it effectively. In this paper, we propose a method to decrypt Samsung smartphone backup data encrypted using a user input called a PIN (Personal Identification Number) and a Samsung backup program called Smart Switch. In particular, we develop algorithms to recover the PIN and to decrypt the PIN-based encrypted backup data. We have experimentally verified PIN recovery and backup data decryption for PINs of up to 9 digits. Our implementation, using a precomputed PIN table occupying 30.51 GB of memory, takes about 11 minutes to recover a 9-digit PIN. To the best of our knowledge, this is the first result on decrypting PIN-based encrypted backup data of Samsung smartphones.
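The PIN-recovery idea above can be illustrated with a brute-force sketch: derive a key from each candidate PIN and compare it against the target. The KDF below is an assumed stand-in (PBKDF2-HMAC-SHA256); the paper attacks Smart Switch's real derivation scheme, which is not reproduced here:

```python
import hashlib
import itertools

def derive_key(pin, salt, iterations=200):
    # Illustrative KDF only: PBKDF2-HMAC-SHA256 stands in for whatever
    # scheme the backup program actually uses.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

def recover_pin(target_key, salt, digits=4):
    # Exhaustive search over all PINs of a given length: the same idea
    # a precomputed PIN table accelerates by trading memory for time.
    for tup in itertools.product("0123456789", repeat=digits):
        pin = "".join(tup)
        if derive_key(pin, salt) == target_key:
            return pin
    return None
```

Because the PIN space is tiny (10^9 even for 9 digits), any PIN-derived key falls to exhaustive search; the paper's precomputed table is what brings a 9-digit recovery down to minutes.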
  • Forensic smartphone analysis using adhesives: Transplantation of Package
           on Package components
    • Abstract: Publication date: Available online 31 May 2018. Source: Digital Investigation. Author(s): Th. Heckmann, K. Markantonakis, D. Naccache, Th. Souvignet. Abstract: Investigators routinely recover data from mobile devices. In many cases the target device is severely damaged. Events such as airplane crashes, accidents, terrorism or long submersion may bend or crack the device's main board and hence prevent the use of standard forensic tools. This paper shows how to salvage forensic information when the NAND memory, SoC or cryptographic chips are still intact; we make no assumptions about the state of the other components. In usual forensic investigations, damaged phone components are analysed using a process called “forensic transplantation”. This procedure consists of unsoldering (or lapping) chips, re-soldering them onto a functional donor board and rebooting. Package on Package (PoP) component packaging is a newer technique that allows manufacturers to stack two silicon chips, e.g. memory, CPU or cryptographic processors. PoP is now widely used by most device manufacturers, in particular leading brands such as Apple, BlackBerry, Samsung, HTC and Huawei. Unfortunately, forensic transplantation destroys PoP components. This work overcomes the difficulty by introducing a new chip-off analysis method based on High Temperature Thixotropic Thermal Conductive Adhesive (HTTTCA) for gluing PoP packages to prevent misalignment during the transplantation process. The HTTTCA process allows the investigator to safely unsolder PoP components, a crucial step for transplantation. To demonstrate feasibility, we describe in detail an experimental forensic transplantation of a secure mobile phone's PoP CPU.
  • Laying foundations for effective machine learning in law enforcement.
           Majura – A labelling schema for child exploitation materials
    • Abstract: Publication date: Available online 31 May 2018. Source: Digital Investigation. Author(s): Janis Dalins, Yuriy Tyshetskiy, Campbell Wilson, Mark J. Carman, Douglas Boudry. Abstract: The health impacts of repeated exposure to distressing concepts such as child exploitation materials (CEM, aka ‘child pornography’) have become a major concern to law enforcement agencies and associated entities. Existing methods for ‘flagging’ materials largely rely upon prior knowledge, whilst predictive methods are unreliable, particularly when compared with equivalent tools used for detecting ‘lawful’ pornography. In this paper we detail the design and implementation of a deep-learning based CEM classifier, leveraging existing pornography detection methods to overcome infrastructure and corpora limitations in this field. Specifically, we further existing research through direct access to numerous contemporary, real-world, annotated cases taken from Australian Federal Police holdings, demonstrating the dangers of overfitting due to the influence of individual users' proclivities. We quantify the performance of skin tone analysis in CEM cases, showing it to be of limited use. We assess the performance of our classifier and show it to be sufficient for use in forensic triage and ‘early warning’ of CEM, but of limited efficacy for categorising against existing scales for measuring child abuse severity. We identify limitations currently faced by researchers and practitioners in this field, whose restricted access to training material is exacerbated by inconsistent and unsuitable annotation schemas. Whilst adequate for their intended use, we show existing schemas to be unsuitable for training machine learning (ML) models, and introduce a new, flexible, objective, and tested annotation schema specifically designed for cross-jurisdictional collaborative use. This work, combined with a world-first ‘illicit data airlock’ project currently under construction, has the potential to bring a ‘ground truth’ dataset and processing facilities to researchers worldwide without compromising quality, safety, ethics and legality.
  • Logical acquisition method based on data migration for Android mobile
    • Abstract: Publication date: Available online 31 May 2018. Source: Digital Investigation. Author(s): Peijun Feng, Qingbao Li, Ping Zhang, Zhifeng Chen. Abstract: Android dominates the mobile operating system market, and data acquisition from Android devices has been a focus of research in mobile forensics. However, due to continuous updates of the Android system and the deployment of new security technologies, existing data acquisition methods are limited and difficult to apply to new Android mobile devices. To address this problem, we propose a logical acquisition method based on the system-level data migration services provided by Android mobile device manufacturers. Experimental results demonstrate that, for unrooted Android mobile devices, the proposed method is superior to existing logical forensic methods in terms of data acquisition capability.
  • Efficient monitoring and forensic analysis via accurate network-attached
           provenance collection with minimal storage overhead
    • Abstract: Publication date: Available online 8 May 2018. Source: Digital Investigation. Author(s): Yulai Xie, Dan Feng, Xuelong Liao, Leihua Qin. Abstract: Provenance, the history or lineage of an object, has been used to enable efficient forensic analysis in intrusion prevention systems to detect intrusions, correlate anomalies, and reduce false alerts. In a network-attached environment especially, it is critical to accurately capture network context in order to trace back the intrusion source and identify system vulnerabilities. However, most existing methods fail to collect accurate and complete network-attached provenance. In addition, enabling efficient forensic analysis with minimal provenance storage overhead remains a big challenge. This paper proposes a provenance-based monitoring and forensic analysis framework called PDMS that builds upon an existing provenance tracking framework. On one hand, it monitors and records every network session, collecting the dependency relationships between files, processes and network sockets. By carefully describing and collecting network socket information, PDMS can accurately track the data flowing in and out of the system. On the other hand, the framework unifies efficient provenance filtering with query-friendly compression. Evaluation results show that the framework enables accurate and highly efficient forensic analysis with minimal provenance storage overhead.
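PDMS's internal data model is not given in the abstract; the sketch below, with invented node names, only illustrates the kind of backward trace that file/process/socket dependency records make possible: starting from a suspicious artifact and walking dependency edges back to the network socket that introduced it.

```python
from collections import deque

# Illustrative provenance graph: each edge points from an object to the
# objects it was derived from (a process read a file, a process received
# data from a socket, a file was written by a process). Names are made up.
provenance = {
    "file:/tmp/exfil.tar": ["proc:tar(1337)"],
    "proc:tar(1337)": ["file:/etc/passwd", "proc:bash(1001)"],
    "proc:bash(1001)": ["socket:10.0.0.5:4444"],  # remote shell session
}

def trace_back(artifact, graph):
    """Backward forensic trace: collect everything the artifact depends on."""
    seen, queue = set(), deque([artifact])
    while queue:
        node = queue.popleft()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen
```

Here `trace_back("file:/tmp/exfil.tar", provenance)` reaches the network socket, identifying the intrusion source; accurate socket records are what make that last hop possible.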
  • TREDE and VMPOP: Cultivating multi-purpose datasets for digital forensics
           – A Windows registry corpus as an example
    • Abstract: Publication date: Available online 28 April 2018. Source: Digital Investigation. Author(s): Jungheum Park. Abstract: Demand is rising for publicly available datasets to support studying emerging technologies, performing tool testing, detecting incorrect implementations, and ensuring the reliability of security and digital forensics knowledge. While a variety of data is created day to day in security, forensics and incident response labs, that data is often impractical to use or has other limitations. A variety of researchers, practitioners and research projects have therefore released valuable datasets acquired from computer systems or digital devices used by actual users, or generated during research activities. Nevertheless, there is still a significant lack of reference data for supporting a range of purposes, a need to increase the number of publicly available testbeds, and a need to improve verifiability as ‘reference’ data. Although existing datasets are useful and valuable, some have critical limitations on verifiability if they were acquired or created without ground truth data. This paper introduces a practical methodology for developing synthetic reference datasets in the field of security and digital forensics. The proposal divides the steps for generating a synthetic corpus into two classes: user-generated and system-generated reference data. In addition, this paper presents a novel framework to assist the development of system-generated data, along with a virtualization system and elaborate automated virtual machine control, and then performs a proof-of-concept implementation. Finally, this work demonstrates that the proposed concepts are feasible and effective through practical deployment, and evaluates their potential value.
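The framework's actual automation interface is not described in this abstract, so the class below is purely conceptual, with all names hypothetical: it shows the core idea that every action an automated generator performs is logged as a ground-truth record alongside the generated data, which is what makes the resulting corpus verifiable as reference data.

```python
import json
import time

class GroundTruthRecorder:
    """Conceptual sketch: log every data-generating action so the
    resulting synthetic corpus ships with verifiable ground truth."""

    def __init__(self):
        self.records = []

    def perform(self, action, target, detail=""):
        # In a real framework this step would also drive a virtual
        # machine (keystrokes, file operations); here we only record
        # the intent, which becomes the ground-truth entry.
        self.records.append({
            "timestamp": time.time(),
            "action": action,   # e.g. "create_file", "run_app"
            "target": target,   # e.g. a path or executable name
            "detail": detail,
        })

    def export(self):
        """Serialize the ground-truth log for release with the corpus."""
        return json.dumps(self.records, indent=2)

rec = GroundTruthRecorder()
rec.perform("create_file", r"C:\Users\test\notes.txt", "user-generated data")
rec.perform("run_app", "notepad.exe", "system-generated traces expected")
```

A dataset consumer can then check every artifact found in the corpus against this log, rather than trusting undocumented provenance.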
  • Navigating the Windows Mail database
    • Abstract: Publication date: Available online 21 March 2018. Source: Digital Investigation. Author(s): Howard Chivers. Abstract: The Windows Mail application in Windows 10 uses an ESE database to store messages, appointments and related data; however, the field (column) names used to identify these records are hexadecimal property tags, many of which are undocumented. To support forensic analysis, a series of experiments was carried out to diagnose the function of these tags, and this work resulted in a body of related information about the Mail application. This paper documents the property tags that have been diagnosed and presents how Windows Mail artifacts recovered from the store.vol ESE database can be interpreted, including how the paths of files recorded by the Mail system are derived from database records. We also present example email and appointment records that illustrate forensic issues in the interpretation of messages and appointments, and show how additional information can be obtained by associating these records with other information in the ESE database.
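Whether Windows Mail's column names follow the classic MAPI packing exactly is part of what the paper diagnoses; as a hedged illustration only, a MAPI-style property tag packs a 16-bit property identifier in the high word and a 16-bit type code in the low word, so a first triage step when meeting an unknown hexadecimal column name is to split it:

```python
# MAPI property type codes (a small, well-known subset).
PT_TYPES = {
    0x001F: "PT_UNICODE",  # UTF-16 string
    0x0040: "PT_SYSTIME",  # 64-bit FILETIME timestamp
    0x0102: "PT_BINARY",   # raw byte blob
    0x0003: "PT_LONG",     # 32-bit integer
    0x000B: "PT_BOOLEAN",
}

def decode_tag(tag):
    """Split a 32-bit MAPI-style property tag into (property ID, type name)."""
    prop_id = (tag >> 16) & 0xFFFF
    prop_type = tag & 0xFFFF
    return prop_id, PT_TYPES.get(prop_type, hex(prop_type))

# Example: 0x0037001F is the well-known MAPI PR_SUBJECT tag,
# i.e. property ID 0x0037 with type PT_UNICODE.
```

Knowing the type half of the tag at least tells an examiner how to parse a column's bytes, even when the property ID itself is one of the undocumented values the paper diagnoses experimentally.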