The State of Digital Forensic Tools

Introduction

The exponential developments that mark the technological era have had varied effects on the global community. What people refer to as the 'digital age' is characterized by improvements in general efficiency and business processes, enhancement of image, reduced transaction costs through the elimination of intermediaries, and better customer service, among other parameters. Socially, the emergence of user-generated content (UGC) sites such as Facebook, MySpace, and Twitter has broadened social horizons to the extent that the world is now referred to as a 'Global Village'. Developments, inventions, and discoveries are being made in medicine, astronomy, meteorology, and other critical fields as a result of this advance in technology, particularly computer technology (Raul, Volpe, & Meyer, 2001, p. 849). However, as with every other positive action, there is a corresponding negative reaction. For computer technology, this takes the form of health disorders such as myopia, repetitive stress injury, and social network (e.g., Facebook) addiction (Williams, 2006, p. 14). It also takes the form of memory disruption by magnetic fields, and of cybercrimes such as hacking; computer hardware and software theft; breaches of network security; cyber-terrorism and the propagation of extremist agendas; identity theft; e-commerce fraud; child pornography; piracy of movies, music, and other copyrighted intellectual property; cyber-defamation; cyber-stalking; and cyber-harassment. Consequently, investigations, trials, and sanctions have been introduced for computer-related crimes such as source code attacks, obscenity, failure to comply with a Controller's directions or requirements, access to designated protected systems, misrepresentation, confidentiality breaches, publication of false digital certificates, and availing one's digital signature for fraudulent purposes, among other offences.

The new field of law (cybercrime) comes with specific requirements in terms of investigation procedure. The most crucial of all these concerns the laws of evidence. As with the traditional evidence laws that govern physical crimes, evidence of virtual crimes must conform to the legally required procedures and specifications for it to be admissible in a judicial review. Failure to adhere to such stringent requirements often results in the repudiation or rebuttal of evidence, at the expense of a person's freedom or employment (Carrier, 2003, p. 16). It therefore follows that the digital forensic investigators charged with the responsibility of generating evidence for a case must do a satisfactory job that will yield just adjudications when presented for judicial review. However, this may not always be possible due to the complications arising from the availability of both open source and closed source software in the market for use in the acquisition, analysis, and presentation of evidence data. This paper will examine in detail the various complications that arise from this coexistence and provide recommendations for future research in the area.

Objectives/Aims

The paper intends to add to existing research by drawing a comprehensive outline of the various features and characteristics of both open source and closed source software tools. Such an outline will discuss the usability and relevance of the tools in digital forensic investigations while also listing their various shortcomings, and it will include propositions for future developments in the field. The main aim of this research is to equip software developers with knowledge of the legal component of their profession, so that as they continue to develop new software and update existing tools, they will know how relevant certain features and characteristics are for usability in judicial review. I hope that such knowledge will empower their decision-making on certain sensitive matters, such as the publication of specifications and procedures for public review. I also hope that this exposé will instigate further innovative competition among software developers as they shift towards open source software, which promotes the legal requirements of digital evidence.

Literature Review

I chose to use a literature review, that is, an analysis of past research on the subject matter of this study, because this is still a fledgling field in which quantitative surveys are yet to be carried out on the relevant variables. At this stage, qualitative accounts and findings are the most informative of all the available empirical evidence. These too are neither extensive nor comprehensive, but at least some exist, unlike numerical statistics on the relationships among variables. I therefore studied the various works that have been published on the effectiveness and popularity of digital forensic tools, as well as their conformity to legal requirements. The review provides a powerful theoretical background that can inform future research, especially by identifying some of the subsets of possible future research areas. However, due to the minimal progress that has been made in connecting the legal enterprise with the technological aspects, the findings of this research are not material for generalization. The concepts discussed are not presented in a conclusive form, leaving room for future amendments in line with changes in legislative measures and in the construction of DFTs. Nevertheless, it is important to note that the findings reported are factual and optimally objective; thus, they can be applied across the board on all matters concerning the legal issues affecting the development of digital forensic tools.

As expounded in the introduction above, technology has evolved substantially and, by extension, so have related concerns such as health and security risks. In this evolution, health risks were the first to be identified, and with them came extensive research and proposed reforms. Security risks were identified next, and research was conducted in response to the problem at hand. Digital forensics is a relatively new field, one among many that have since been developed to cope with these security issues. Others that preceded digital forensics include IT security measures such as access controls, encryption, biometrics, network security, and security algorithms (Ieong, 2006, p. 29). Consequently, those areas have been studied extensively, and a great deal of research has been conducted on them. Digital forensics, however, is just emerging, and relatively few studies have examined it as a field, except perhaps for mentions in passing in other studies or, rarer still, individual specialists' personal reports on the effectiveness of various tools, procedures, and methodologies. Incorporating a legal aspect into digital forensic tool development alienates the field even further from existing research, leaving one with the only option of piecing together related information from various disconnected studies. That means that this paper's literature review includes study reports that seem unrelated in their main content to the subject matter at hand but contain bits and pieces of information relevant to this study.

The paper has reviewed research reports from the Emerald online resource center. US Cybercrime Units on the World Wide Web by Sameer Hinduja and Joseph Schafer reports on the number of cybercrime units with a presence on the world wide web, using a sample of 88 such units (Hinduja, & Schafer, 2009, p. 280). The units comprise law enforcement officers who work in conjunction with the community to get tips and information on local offenders. They also consist of legal practitioners who obtain pre-investigative advice on the evidence required for solving a particular case, thus saving costs and time while remaining relevant. Moreover, the law officers assume the responsibility of apprehending and detaining offenders and executing punishments. The research also deduces the type of information these law enforcers post on their websites, seeking to identify from such sites the procedures for handling computer-related crimes. The article A Survey of Intrusion Detection and Prevention Systems by Ahmed Patel, Qais Qassim, and Christopher Wills addresses the rising rates of virtual criminal activity and the resultant need for stringent security measures to deal with the problem. Among the measures suggested are firewalls, intrusion detection systems (IDSs), intrusion prevention systems (IPSs), authentication, encryption, and other hardware and software solutions. The causes of these problems are taken to be wired and wireless communication network developments, the internet, web applications, and general computing advances (Patel, Qassim, & Wills, 2010, p. 285).

The paper suggests the integration of 'new' techniques such as machine learning and automation as the way forward to counter both known and unknown threats. Thirdly, Digital Forensics: Defining a Research Agenda by Kara Nance, Brian Hay, and Matt Bishop provides a comprehensive report on the various possible fields of research in digital forensics. The paper asserts that most of the existing solutions to digital crimes came about as reactions to various threats and deviant behaviors, not through preplanned or developmental research, and therein lies their problem (Nance, Hay, & Bishop, 2009, p. 4): they are retrogressive rather than proactive. To counter this, the paper identifies various fields for potential research, including evidence models, legal frameworks, and the technical/technological aspects of digital forensic research. FORZA – Digital Forensics Investigation Framework that Incorporate Legal Issues by Ricci S.C. Ieong provides a comprehensive overview of what digital forensics is, defines digital forensic investigation, and proposes fundamental principles that should govern digital forensic investigations. It also discusses the importance of collaboration among information technologists, legal practitioners, and investigators for the effective development of efficient DFTs, and proposes a technology-independent framework to guide such a relationship (Ieong, 2006, p. 33). It highlights eight tasks worth undertaking in a productive investigation and proposes six critical questions to ask at each stage of the investigation for optimal results in terms of the evidence generated. The author illustrates the usability of the proposed framework with a 'web hacking' example. Further, Malware: the New Legal Risk by Verine Etsebeth is a South African study, conducted locally, on the prominence of malware as the main cause of computer insecurity. Malware poses threats to information assets, systems, and resources, and mostly manifests in the form of firewall breaches, attacks on Virtual Private Networks (VPNs), and the manipulation of digital signatures. The study aims at promoting awareness among a largely ignorant citizenry who are unaware of the legal bearings of such practices, as well as of the available solutions (Etsebeth, 2007, p. 538).

It discusses the matter of legal liability, especially when one is vicariously liable, and outlines the law as far as malware is concerned in South Africa. It is the first study of its kind in South Africa, and it seeks to provide insight to corporate societies on how to avoid liability, as well as how to deal with it in the event that it occurs. The sixth report is Open Source Digital Forensics Tools: the Legal Argument by Brian Carrier, a very comprehensive paper that specifically deals with the legal issues associated with the development of digital forensic tools (DFTs). The author engages in a conclusive discussion of the step-by-step process of a digital forensic investigation and the legal issues involved, including the 'Daubert' guidelines and the Federal Rules of Evidence, then analyzes the legal requirements for DFTs and, finally, compares open source and closed source forensic tools (Carrier, 2003, p. 9). He also highlights the superiority of open source software, especially in terms of the admissibility of evidence, and provides a conciliatory option for both open and closed source software developers, whereby both can maintain a competitive edge in the market by focusing on features, user interfaces, and support systems for variability and uniqueness in production. The author notes that such software is very crucial, as it is used to prove innocence, sustain or lose employment, and/or convict criminals. Developers should therefore refrain from focusing exclusively on marketing techniques and practices while creating and updating the software, and instead concentrate on improving its quality.

He posits that this can only be done through the publication of specifications, procedures, designs, and source codes, especially by closed source software developers, who mostly produce software for commercial purposes and protect their products by avoiding publication. He also proposes that open source software developers use plain language to explain their codes, not just publish codes that at best appear as jargon to the rest of the public. The report Towards an International Road-Map for Cyber-Security by Christine Sund provides a holistic view of the growing distrust of internet applications, especially because of the raging computer insecurity. Sund also touches on the issue of developing countries, showing how they too are trying to cope with both advancing technology and the growing risks associated with this advancement, and on the legal reforms they have instituted in a bid to manage cybercrime (Sund, 2007, p. 573). The report mentions various stakeholders who can be looked upon to manage the growing distrust of computer systems and then touches on corporate measures that attempt to allay employees' wariness of using computers.

It also touches on the issue of technology dependence. Finally, Sund identifies various probable solutions to computer insecurity that can be launched at the organizational level. The Personal Curation of Digital Objects: a Lifecycle Approach by Peter Williams, Jeremy Leighton John, and Ian Rowland is an article focusing on how people create, use, manage, organize, and dispose of their digital products, including information storage devices. It addresses the possibility of individuals finding personal ways of storing important information for posterity (Williams, Leighton, & Rowland, 2009, p. 349). Personal information management is discussed at length in comparison with how institutions and corporations store their important data, and the idea of personal digital archives is presented persuasively. The ninth and last article is Information Warfare: Peering inside Pandora's Postmodern Box by Blaise Cronin, a very interesting piece, actually a lecture presented by the author, on information warfare (IW) and the effect it has on the democratization of real war (Cronin, 2001, p. 288). It deals with the topic from both the offensive and the defensive perspectives and even covers information terrorism and its effects on individuals and society.

Findings

Reviewing the above literature informed my final opinion on the legal issues that influence the development of digital forensic tools. Firstly, the paper notes the plethora of legal issues that influence the field of digital forensics. In general, the field deals with Property Law, Contract Law, Taxation Law, Communications Law, Criminal Procedure, Penal Codes, Evidence Law, Constitutional Law, Cybercrime, Cyber-War, Tort Law, and other legal aspects (Nance, Hay, & Bishop, 2009, p. 3). These laws inform the practice of digital forensics in terms of the parameters/scope and the method of execution of its processes. They are what give digital forensics an agenda, as it is their breach that digital forensic investigators are called upon to prove. As such, investigations must conform to these laws, among others, for their results to be accepted as admissible before a court of justice. Most countries tend to insert computer crime legislation within their existing legislation (Hinduja, & Schafer, 2009, p. 290). This can only go so far in the prevention and management of these crimes; in most cases, such states find that they need an autonomous body of laws to direct the handling of computer crimes within and beyond their jurisdictions. Jurisdiction is another matter that is crucial to law enforcers. The nature of computer crimes has become increasingly confounding and complex due to the presence of the internet. Prudent criminals tend to use obfuscation techniques to avoid detection and apprehension (Cronin, 2001, p. 292). They target individuals in multiple countries or states, hence transcending various jurisdictional territories, and this confounds investigative measures.

However, the main interest of this study lies in the Evidence Laws that govern digital forensics. Digital forensics has existed since computers could store data usable as evidence, and it has advanced from the first-generation use of EnCase software to today's AccessData FTK3 (Carrier, 2003, p. 16). Initially, only government agencies applied forensics; now the commercial sector is in on it too. The other major development has been the move from custom and proprietary software to today's specialized software. For the purposes of this study, digital forensics is taken to mean "a process, not an elephant, and not just a single process but a group of tasks and processes in an investigation" (Pollitt, & Brill, 2006, p. 4). By extension, a digital forensic investigation is "the process used to determine and relate extracted information and digital evidence to establish factual information for judicial review" (Ieong, 2006, p. 31).

For such an investigation to take place, an investigator needs tools for the extraction and analysis of information; and since the data in question is acquired with the intention of being presented for judicial review as evidence in a case, it needs to be acquired in an acceptable manner. This is where the law comes in: the law determines what is acceptable and what is not. Initially, the method used to determine whether a tool was acceptable was the 'Frye Test', which was applied to scientific evidence presented for judicial review in order to determine its reliability (Carrier, 2003, p. 5). It was conducted through an analysis of the peer reviews of the scientific community, found in related journals. However, this method is not applicable in digital forensics because the field is relatively new: there are not many published journal reviews of the effectiveness or reliability of digital forensic tools and procedures, and the International Journal of Digital Evidence is itself very recent. Consequently, there was a need to establish a better test, and this was done in the Daubert v. Merrell Dow Pharmaceuticals (1993) ruling. Since then, the guidelines that direct the admissibility of digital evidence, which is largely informed by the tools used to procure it, have been known as the 'Daubert Guidelines' (Carrier, 2003, p. 8).

However, the basic requirements for a DFT are relevance and reliability. As noted before, since these tools can be responsible for either the freedom or the incarceration of an individual, they need to conform to the legal requirements now known as the 'Daubert' guidelines. Digital forensic investigations are conducted on matters such as computer intrusion, unauthorized use of corporate computers, child pornography, and any physical crime whose suspect had access to a computer (Cronin, 2001, p. 283). Various tools are used in digital forensic investigations, among them extraction tools. These are used in processing data, and they specifically pick out a subset of the data they are analyzing. For instance, in a file system analysis, files and directories are analyzed for a content survey; the extraction tool can then pick out the time of last access as a subset. A second type of tool is presentation tools, which organize the data obtained by extraction tools into constructive categories. An example is the Modified, Accessed, and Changed (MAC) times, which create a time frame for the various activities a suspect performed on his hard disk (Carrier, 2003, p. 10); a toy sketch of such an extraction appears below. If an investigator uses an open source extraction tool, he can switch to a closed source presentation tool, so long as its design is published. Standardization of extraction tools would not inhibit a company's competitive edge; instead, it would lead to more innovation in terms of creating better closed source presentation tools.
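
To make the abstraction concrete, the following is a minimal sketch, in Python, of what an extraction tool does at this layer: it walks a directory tree, pulls the modified, accessed, and changed (MAC) timestamps from file metadata, and emits a chronological timeline. The sketch reads a live file system for simplicity; real tools of the kind Carrier describes parse the file system inside an acquired image, so the paths and this simplification are illustrative assumptions rather than a description of any tool named above.

```python
import os
import sys
from datetime import datetime, timezone

def mac_timeline(root):
    """Collect (timestamp, event, path) tuples for every file under root.

    A toy stand-in for an extraction tool: it picks the MAC-times subset
    out of the larger body of file system data, nothing more.
    """
    events = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable entries instead of aborting
            for label, ts in (("modified", st.st_mtime),
                              ("accessed", st.st_atime),
                              ("changed", st.st_ctime)):
                events.append(
                    (datetime.fromtimestamp(ts, tz=timezone.utc), label, path))
    return sorted(events)  # chronological order, i.e. the timeline itself

if __name__ == "__main__":
    # Usage (hypothetical mount point): python mac_timeline.py /mnt/evidence
    for when, label, path in mac_timeline(sys.argv[1]):
        print(f"{when.isoformat()}  {label:8}  {path}")
```

A presentation tool would then take these sorted events and group or annotate them for a report; this separation of concerns is exactly what allows an open source extraction tool to feed a closed source presentation tool.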

The first stage of an investigation is acquisition, which saves the original state of a digital system for later analysis. Investigators strive to capture all digital values, as it is impossible to tell at this stage what will be useful and what will not. This includes both the used and the unused spaces of a computer hard drive, and DFTs are used to copy the data from the original device belonging to a suspect to a trusted device for later evaluation. To be considered reliable, the tools used must copy all the data while limiting any modifications made to the suspect's device. The second stage is the analysis of the acquired data, done to identify the evidence incorporated within it. The three types of evidence deduced are inculpatory evidence, which supports a particular theory; exculpatory evidence, which contradicts a particular theory; and evidence of tampering, which suggests that the system was manipulated in an effort to prevent the detection of some information. During analysis, file and directory contents are examined, deleted data is recovered, and the findings are presented in a constructive and comprehensive format. It is important that an exact copy of the original be used at this stage, and an MD5 checksum can be used to validate this; a minimal sketch of such a check follows below. It is also imperative that the tests display all the data present in the image. "The last stage is Presentation, which is a report on the conclusions and corresponding evidence from an investigation" (Kenneally, 2000, p. 131). How the report is made depends on the context in which it will be presented. Contexts vary: in corporate settings, the audience comprises the general counsel, the HR department, and executives, and privacy laws as well as corporate policies influence the presentation; legal contexts involve judges and juries, with lawyers having to evaluate the evidence before presentation.
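
The integrity check mentioned above is simple enough to sketch. The fragment below is a minimal illustration rather than any tool's actual implementation: it streams both the source device and the acquired image through MD5 and compares the digests. The device and image paths are hypothetical placeholders, and in practice the copy itself would be produced by a dedicated imaging tool behind a write blocker.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so multi-gigabyte images fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the suspect's raw device and the bit-for-bit copy.
source_md5 = md5_of("/dev/sdb")           # suspect drive, attached read-only
image_md5 = md5_of("evidence/disk.img")   # image acquired earlier

print("source:", source_md5)
print("image: ", image_md5)
print("verified" if source_md5 == image_md5
      else "MISMATCH: the copy is not faithful")
```

Matching digests show that the analyzed image is an exact copy of the original at acquisition time, which is precisely the property the analysis stage depends on.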

To guarantee the relevance of forensic evidence to a specific case, there is a need for cooperation between the investigator and the legal practitioner, whereby the legal practitioner offers advice on what to look for during extraction and analysis. Reliability is determined by a judge during a pretrial 'Daubert' hearing (Kenneally, 2000, p. 133). The judge determines whether the methodology and techniques applied during the extraction and analysis of the evidence are 'sound' and, by extension, reliable. The Daubert guidelines inform this determination. They are: testing, error rate, publication, and acceptance. The judge is intent on finding out whether the procedure used can be tested for accuracy and, if so, whether this has been done. Tests can produce either false negatives or false positives. False negative tests are meant to ascertain that the tool copied all the data that was available to the destination device; this result is usually more difficult to prove and can only be validated using a second tool. When testing a tool, the general aim is to prove that it can handle all the situations it may encounter (Sund, 2007, p. 579). Presently, there is no standard testing methodology, and investigators identify and report bugs to developers for the updating of systems. With access to a source code, testing is made easier, as it can be done through code and design reviews. By identifying code that has been changed or omitted in the current edition of the software, one can tell which code contained a bug and how it has been updated. For open source software, errors occur only during programming. They are also easier to calculate because the source is published, and one can draw conclusions by comparing previous and current source codes of the software.
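
One way to approximate the second-tool validation mentioned above is to compute the same digest with two independent implementations and require agreement. The sketch below is an illustration under the assumption of a Unix-like host with the md5sum utility installed; it cross-checks Python's hashlib against the system tool on a hypothetical image path, and disagreement would indicate a fault in one of the tools rather than in the evidence.

```python
import hashlib
import subprocess

IMAGE = "evidence/disk.img"  # hypothetical acquired image

def md5_hashlib(path):
    """First implementation: Python's own hashlib, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def md5_coreutils(path):
    """Second, independent implementation: the coreutils md5sum binary."""
    out = subprocess.run(["md5sum", path], check=True,
                         capture_output=True, text=True)
    return out.stdout.split()[0]  # output format: "<hash>  <path>"

a, b = md5_hashlib(IMAGE), md5_coreutils(IMAGE)
print("hashlib:", a)
print("md5sum :", b)
print("tools agree" if a == b
      else "tools disagree: validate both before trusting either result")
```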

However, in closed source software, errors are common, and they mostly occur due to a lack of knowledge and understanding of the software's entire specifications (Carrier, 2003, p. 9). These errors take one of two forms: tool implementation errors, which are caused by the presence of bugs in the code and whose rate varies with the bugs' number and severity, and abstraction errors, which stem from the tool making decisions that oppose the software's original intended use. Such errors are also difficult to calculate because closed source software developers are adamant about not publishing their source codes: they produce software for commercial purposes and believe that divulging such basic information would erode their competitive edge. There is no standard way of calculating error rates, and closed source developers often cite user market share as proof of effectiveness. However, this is neither logical nor accurate, because the number of people buying the software indicates neither the frequency of usage nor the capability of the software to handle complex data (Carrier, 2003, p. 19). The error rate must account for both simple and complex procedures, especially where evidence of tampering is present. Procedures need to be availed to the public (published) and subjected to review, i.e., specialists in the same field must pass judgment on the procedures invented by colleagues (Kenneally, 2000, p. 129). Both file system analysis procedures and data recovery procedures need to be published so that a judge can decide, based on experts' testimonials, whether the applied procedure, including the tools used in the generation of evidence, is admissible in court.

It is a legal requirement that the associated scientific community reviews and evaluates published procedures; by extension, publication is essential for this standard to be met. The idea of closed source developers citing the number of users is again unacceptable, because numbers do not somehow explain the procedures the software uses. It is possible that all those users prefer a particular piece of software for other, non-procedural values such as its interface. Only when procedural details have been published, and they become a determinant in informing buying decisions, can numbers be used to convey acceptance.

Practical Implications and Recommendations

These findings imply that most software developers lean towards the production of software for commercial purposes, hence producing closed source programs. Consequently, there is little development or improvement of the original/traditional open source software, yet these are the tools found reliable, or sound, for use in judicial review. However, this is not a very big problem, and the solution lies in the publication of source codes by closed source software developers. That would lead to the admissibility of evidence generated using such software, and justice would be meted out accordingly in case of a computer-crime-related dispute. Unless closed source developers abide by these requirements, their software can never meet the standards required for the digital evidence it generates to be admissible in court.

Brian Carrier also recommends that although open source developers publish their source codes, they should accompany these with explanations in plain language that can be easily understood by non-professionals, thus enabling such details to inform purchasing decisions. Finally, producing open source extraction tools (whose procedures, design, and specifications have been published and can be monitored and evaluated) for use alongside closed source presentation tools could prove even more competitive than the current situation, in which every developer insists on fully closed source software. Since the presentation tools, for instance, would remain closed source, competition would shift to innovation, with developers vying to invent dynamic features that are more marketable than their rivals'.

Conclusion

The field of digital forensics is still at a budding stage, so the systemic complications being experienced, such as the refusal of commercial software developers to publish source code information, are inevitable at this point. However, it is also necessary that software developers see the bigger picture and stop being selfish. The software they are developing can serve a greater cause than simply enriching its makers; the original idea behind inventing it was the propagation of justice, and this should never be lost. Ironically, it is in such propagation that better financial opportunities lie. For instance, more competitive market arenas can be created by the standardization of tools through the publication of source details, as this will force innovators and developers to think of new ways of benefiting financially. Such publication will also promote justice in computer criminology, hence leveling the ground for equal competition.

References

Carrier, B. (2003). Defining digital forensic examination and analysis tools using abstraction layers. International Journal of Digital Evidence, 14-30.

Carrier, B. (2003). Open source digital forensics tools: The legal argument. @stake Research Report, 1-11.

Cronin, B. (2001). Information warfare: peering inside Pandora's postmodern box. Library Review, 279-294.

Etsebeth, V. (2007). Malware: the new legal risk. The Electronic Library, 534-542.

Hinduja, S., & Schafer, J. (2009). US cybercrime units on the world wide web. Policing: An International Journal of Police Strategies & Management, 278-296.

Ieong, R. (2006). FORZA – Digital forensics investigation framework that incorporate legal issues. Digital Investigation, 29-36.

Kenneally, E. (2000). Gatekeeping out of the box: Open source software as a mechanism to assess reliability for digital evidence. Virginia Journal of Law and Technology, 128-135.

Nance, K., Hay, B., & Bishop, M. (2009). Digital forensics: Defining a research agenda. Proceedings of the 42nd Hawaii International Conference on System Sciences (pp. 1-6). IEEE.

Patel, A., Qassim, Q., & Wills, C. (2010). A survey of intrusion detection and prevention systems. Information Management & Computer Security, 277-290.

Pollitt, M., & Brill, A. (2006). The evolution of computer forensic best practices: an update on programs and publications. Journal of Digital Forensic Practice, 3-11.

Raul, A., Volpe, F., & Meyer, G. (2001). Liability for computer glitches and online security lapses. BNA Electronic Commerce Law Report, 849.

Sund, C. (2007). Towards an international road-map for cybersecurity. Online Information Review, 566-582.

Williams, P. (2006). MySpace, Facebook attract online predators: experts say be careful what you post online – somebody is always watching. Journal of International Digital Forensics, 12-18.

Williams, P., Leighton, J., & Rowland, I. (2009). The personal curation of digital objects: A lifecycle approach. Aslib Proceedings: New Information Perspectives, 340-363.