Forensic Computer Analysis: An Introduction


by Dan Farmer and Wietse Venema

Dr. Dobb’s Journal, September 2000

Reconstructing past events.
Becoming a Digital Detective
Frankenstein
Beauty Is More Than Skin Deep
What’s Going On?
The Zero Effect
For More Information

This is the first in a series of articles on forensic computer analysis. Our prime goal is to illustrate the reconstruction of past events with as little distortion or bias as possible. We won’t be discussing real crimes here, however (other than a few technical homicides inflicted on code by vendors); indeed, we will only rarely be discussing computer crimes at all. If we were choosing titles for this discussion, “virtual archeology,” “time traveling,” or “digital detective work” could all be used fairly interchangeably.

However, many analogies can be drawn from the physical to the virtual realms of detective work -- anyone who has seen a slaying on a police show can probably give a reasonably good account of the initial steps in an investigation. First, you might protect and isolate the crime scene from outside disturbances. Next comes recording the area via photographs and note taking. Finally, a search is conducted to collect and package any evidence found.

The digital analogs to these steps are precisely what we suggest when faced with a computer investigation. We recently released the Coroner’s Toolkit, a set of UNIX data collection and analysis tools that are freely available at http://www.fish.com/forensics/ and http://www.porcupine.org/forensics/. While we will be referencing our software and commenting on some of the code, the intended focus of this series is the thought process occurring behind the scenes, not simply the coding details or how to use the programs.
Becoming a Digital Detective

If you want to solve a computer mystery effectively, then you need to examine the system as a detective, not as a user. Fortunately for programmers, many of the skills required are the same as those used when creating or debugging software -- logical thinking, uncovering and understanding the cause and effect of a computer’s actions, and possessing an open mind. There are a few significant differences between chasing down a bug and trying to solve a mystery, however. As a programmer, you’re often only working against yourself -- trying to fix problems of your own making. Debugging takes time, testing, and repeated effort to ensure that the bugs are squashed.

Solving a computer crime is more akin to working against rival programmers who are attempting to subvert your code and hide their crafty work.

Other than that, solving computer mysteries has much in common with its physical counterpart. You generally don’t have a lot of time to solve a mystery. Evidence vanishes over time, either as the result of normal system activity or as the result of acts by users. Likewise, every step that you take destroys information, so whatever you do has to be right the first time or valuable information is lost. Repetition is not an option.

Fortunately, you can have one major advantage over your opponent -- knowing your system better than anyone else. Many computer forensic mysteries are fairly basic and are caused by relatively inexperienced people.

More than ever, understanding cause and effect is absolutely crucial -- any opponent has lots of opportunities to change or subvert your machine. An in-depth technical understanding of the system and what it does on a day-to-day basis is vital. How can you trust a compromised system to produce valid data and results?

Let’s start with a simple example. Here are some of the basic steps involved when running a command (a short code sketch follows the list):

The shell first parses what is typed in, then forks and executes the command (environment and path variables can have a significant effect on exactly which command gets executed with what arguments, libraries, and so on).

The kernel then needs to validate that it has an executable. It talks to the media device driver, which then speaks to the medium itself. The inode is accessed and the kernel reads the file headers to verify that it can execute.

Memory is allocated for the program and the text, data, and stack regions are read.

Unless the file is a statically linked executable (that is, fully contains all the necessary code to execute), appropriate dynamic libraries are read in as needed.

The command is finally executed.
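
To make this concrete, here is a minimal Python sketch of the fork-and-exec step at the heart of the list above. It is only a skeleton, not how any real shell is implemented: PATH lookup, dynamic linking, and memory setup are all done for us by layers that an intruder may have tampered with, and the “ls -l” command is merely a placeholder.

import os
import sys

def run_command(argv):
    """Bare-bones sketch of what a shell does: fork, then exec the command."""
    pid = os.fork()                     # the shell forks a child process
    if pid == 0:                        # child: replace itself with the command
        try:
            os.execvp(argv[0], argv)    # execvp() searches PATH, as a shell would
        except OSError:
            os._exit(127)               # command not found or not executable
    _, status = os.waitpid(pid, 0)      # parent: wait for the child to finish
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    sys.exit(run_command(["ls", "-l"]))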

As you can see, there are a lot of places in the system that work together when a command is executed -- and a lot of places where things can be compromised, go wrong, or otherwise be of forensic interest. If you are working against someone and you cannot trust these components, is there any hope of drawing valid conclusions?

Much of the challenge of forensic computing is trying to make sense of a system when the data you’re looking at is of uncertain heritage. In this series, we’ll focus on several aspects of this process and how to draw valid conclusions in spite of the problems associated with it.
Frankenstein

Given a collection of logfiles, wouldn’t it be great if you could use those records to replay past events, just like watching a video tape? Of course, logfiles record system activity with relatively low time resolution. The reconstructed behavior would be a little stiff and jerky, like the monster in Frankenstein.

Most computer systems do not record sufficient information for Frankenstein-like experiments.

Still, computers produce numerous logs in the course of their activities. For example, UNIX systems routinely maintain records of logins and logouts, of commands executed on the system, and of network connections made to the system. Individual subsystems maintain their own logging. Mail delivery software, for instance, maintains a record of delivery attempts, and the cron daemon for unattended command execution has its own activity logs. Some privileged commands, such as su (switch userid), log every invocation regardless of its success or failure.

Each individual logfile gives its own limited view of what happened on a system, and that view may be completely wrong when an intruder has had opportunity to tamper with the record.

The next couple of paragraphs focus on how multiple sources of information must be correlated before one can make a sensible judgment. The examples are UNIX specific, but that is hardly relevant.

Figure 1 shows information about a login session from three different sources: from TCP Wrapper logging (see “TCP Wrapper, Network Monitoring, Access Control, and Booby Traps,” by Wietse Venema, UNIX Security Symposium III Proceedings, 1992), from login accounting, and from process accounting. Each source of information is shown at a different indentation level. Time proceeds from top to bottom.

The TCP wrapper logging (outer indentation level) shows that on May 25, at 10:12:46 local time, machine spike received a telnet connection from machine hades.porcupine.org. The TCP Wrapper logs connection events only, so there is no corresponding record for the end of the telnet connection.

The “last” command output (middle indentation level) shows that user wietse was logged in on port ttyp1 from host hades.porcupine.org and that the login session lasted from 10:12 until 10:13, for a total of less than one minute. For convenience, the record is shown at the two times that correspond with the beginning and the end of the login session.

Output from the lastcomm command (inner indentation level) shows what commands user wietse executed, how much CPU time each command consumed, and at what time each command started. The order of the records is the order in which each process terminated. The last two records were written at the end of the login session, when the command interpreter (csh) and the telnet server (telnetd) terminated.
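
The correlation itself is conceptually simple: merge the records from each independent source onto a single timeline and see whether they tell the same story. The Python sketch below illustrates the idea with hypothetical, already-parsed records modeled on the session described above; in real life the entries would have to be extracted from syslog, wtmp, and the process accounting file, each with its own format.

from datetime import datetime

# Hypothetical, already-parsed records: (source, timestamp, event).
records = [
    ("tcpd",     "2000-05-25 10:12:46", "telnet connection from hades.porcupine.org"),
    ("last",     "2000-05-25 10:12:00", "wietse logs in on ttyp1 from hades.porcupine.org"),
    ("last",     "2000-05-25 10:13:00", "wietse logs out from ttyp1"),
    ("lastcomm", "2000-05-25 10:13:00", "csh terminates (wietse)"),
    ("lastcomm", "2000-05-25 10:13:00", "telnetd terminates"),
]

def timeline(records):
    """Merge records from independent sources into one time-ordered view."""
    return sorted(records, key=lambda r: datetime.strptime(r[1], "%Y-%m-%d %H:%M:%S"))

for source, when, what in timeline(records):
    print(when, source.ljust(8), what)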

How trustworthy is information from logfiles when an intruder has had opportunity to tamper with the record? This question will come up again and again in this series. Intruders routinely attempt to cover their tracks. For example, it is common to find process accounting records from user activity that have no corresponding login accounting records. Fortunately, perfect forgeries are still rare.

And a job done too well should raise suspicion, too (“What do you mean, our web server was completely idle for 10 hours?”).

The records in the example give a nice consistent picture: Someone connects to a machine, logs in, executes a few commands, and goes away. This is the kind of logging that you should expect to find for login sessions. Each record by itself does not prove that an event actually happened. Nor does the absence of a record prove that something didn’t happen. But when the picture is consistent across multiple sources of information, it becomes more and more plausible that someone logged into wietse’s account at the indicated time. Information is power, and when you’re investigating an incident, you just can’t have too much of it.
Beauty Is More Than Skin Deep

As Figure 2 illustrates, computers present us with layers and layers of illusions. The purpose of these illusions is to make computers more convenient to use than the bare hardware from which they are built. But the same illusions impact investigations in several interesting ways.

Computer file systems typically store files as contiguous sequences of bytes, and organize files within a directory hierarchy. In addition to names and contents, files and directories have attributes such as ownership, access permissions, time of last modification, and so on.
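
Practically all of these attributes come from a single stat() call. The short Python sketch below simply asks the file system for them; /etc/passwd is used only because it exists on every UNIX system.

import os
import pwd
import stat
import time

def show_attributes(path):
    """Print the attributes the file system maintains for one file."""
    st = os.lstat(path)                           # do not follow symlinks
    print("name:       ", path)
    print("owner:      ", pwd.getpwuid(st.st_uid).pw_name)
    print("permissions:", stat.filemode(st.st_mode))
    print("size:       ", st.st_size, "bytes")
    print("modified:   ", time.ctime(st.st_mtime))

show_attributes("/etc/passwd")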

The perception of files and directories with attributes is one of the illusions that computer systems create for us, just like the underlying illusion of data blocks and metadata (inode) blocks. In reality, computer file systems allocate space from a linear array of equal-size disk blocks, and reserve some of that storage capacity for their own purposes. However, the illusion of files and of directories with attributes is much more useful for application programs and their users.

And the perception of a linear array of equal-sized disk blocks is an illusion as well. Real disks are made up of heads and platters. They store information as magnetic domains and reserve some of the storage capacity for their own purposes. However, the illusion of a linear sequence of equal-sized disk blocks is much more useful for the implementation of file systems.

The layering of illusions limits how much you can trust information from a computer file system.

Only the physical level with the magnetic domains is real; this level is also the least accessible. The abstractions of disk blocks, of contiguous files, and of directory hierarchies are illusions created by software that may or may not have been tampered with. The more levels of abstraction, the more opportunities for mistakes.

The layering of illusions also limits the possibilities for both data destruction and data recovery.

Deleting a file from the file system is relatively easy, but is not sufficient to destroy its contents or attributes. Information about the deleted file is still present in the disk blocks that were once allocated to that file. In future articles, we will present the Lazarus program and other tools that recover files and file attributes from unallocated disk blocks.

Peter Gutmann wrote an excellent paper on the problem of data destruction (see “Secure Deletion of Data from Magnetic and Solid-State Memory,” Sixth USENIX Security Symposium Proceedings, 1996). Gutmann’s concern is with the security of sensitive information such as cryptographic keys and unencrypted data. Unless you use equipment that protects data by self-destruction, it is necessary to erase sensitive data adequately to prevent it from falling into the wrong hands.

Destroying information turns out to be surprisingly difficult. Memory chips can be read even after a machine is turned off. Although designed to only read 1s and 0s, memory chips have undocumented diagnostic modes that allow access to tiny leftover fragments of bits. Data on a magnetic disk can be recovered even after it is overwritten multiple times. Although disk drives are designed to only read the 1s and 0s that were written last, traces of older magnetic patterns still exist on the physical media. You can find spectacular images of semiconductors and of magnetic patterns online in the Digital Instruments Nanotheater.

In the future, we will use the term “electronic dumpster diving” when talking about the recovery of partially destroyed information. The challenge of electronic dumpster diving is to make sense out of trash. Without assistance from a file system, disk blocks are no longer grouped together into more useful objects such as files, so reconstruction can be like solving a puzzle. As more and more layers of illusion are impacted by data destruction, the remaining information becomes more and more ambiguous.
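
To give a feel for what making sense out of trash involves, here is a toy Python sketch that walks a raw disk image block by block and makes a crude guess about what each block once held. It is emphatically not the Lazarus program mentioned earlier, just a flavor of the problem; the 1-KB block size and the 90 percent printable-text threshold are arbitrary assumptions, and the image path is taken from the command line.

import string
import sys

BLOCK_SIZE = 1024                          # an arbitrary, common block size
PRINTABLE = set(string.printable.encode())

def classify(block):
    """Crude guess at what a raw block once held: nothing, text, or binary data."""
    if not block.strip(b"\x00"):
        return "empty"
    printable = sum(byte in PRINTABLE for byte in block)
    return "text" if printable / len(block) > 0.9 else "binary"

def scan(image_path):
    """Walk a raw disk image block by block, with no help from any file system."""
    with open(image_path, "rb") as image:
        number = 0
        while block := image.read(BLOCK_SIZE):
            print(number, classify(block))
            number += 1

if __name__ == "__main__":
    scan(sys.argv[1])                      # argument: a copy of a disk image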

This leads us to the paradoxical conclusion that stored information can be volatile and persistent at the same time.

The important thing to know is this: The volatility of stored information is largely due to the abstractions that make the information useful.
What’s Going On?

We’ve touched upon how long data stays around after being deleted, but how about the bits that are currently being used on your system? Intruders and problems are most likely going to involve what is on your computer, not what isn’t. We collected some data on various UNIX servers on the Internet to see how frequently their files were accessed. While this isn’t meant to be the last word on the subject (systems may vary widely in the numbers of files they have, the usage patterns, and so on), we still feel that the numbers are indicative of system behavior.

Table 1 shows file utilization patterns of typical systems. These numbers are obtained by gathering timestamps on files using a modified version of the MACtime tool, which will be discussed more fully in the next installment.
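
The measurement behind such a table needs nothing more exotic than lstat(). The rough Python sketch below buckets files by the age of their last access time; the 30-day and one-year cut-offs are assumptions chosen to echo the figures discussed here, and the starting directory is taken from the command line.

import os
import sys
import time

DAY = 24 * 60 * 60
CUTOFFS = (30, 365)                        # days; illustrative cut-offs only

def age_histogram(top):
    """Count files by how long ago they were last read (access time)."""
    counts = {days: 0 for days in CUTOFFS}
    counts["older"] = 0
    now = time.time()
    for directory, _, files in os.walk(top):
        for name in files:
            try:
                atime = os.lstat(os.path.join(directory, name)).st_atime
            except OSError:
                continue                   # vanished or unreadable; skip it
            age = (now - atime) / DAY
            for days in CUTOFFS:
                if age <= days:
                    counts[days] += 1
                    break
            else:
                counts["older"] += 1
    return counts

if __name__ == "__main__":
    print(age_histogram(sys.argv[1]))      # e.g. /usr or /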

The vast majority of files on two fairly typical web servers have not been used at all in the last year.

What is important to note here is that, even on an extraordinarily heavily used (and extensively customized) Usenet news server, less than 10 percent of the files were touched within the last 30 days. Whether it is seldom-used or archived data; programs that users install, test, and forget about; system programs and configuration files that are never touched; or old archives of mail and news; there are lots of files gathering electronic dust on a typical system.

Similar patterns emerge from PCs that run Windows 9x, NT, and the like -- our data shows that often over 90 percent of their files haven’t been touched in the last year. Microsoft systems are often even more one-dimensional in their functionality -- being primarily used for word processing, spreadsheets, and the like, and not as Internet servers or general-purpose systems. This is the case even though Microsoft operating systems are as complex as (if not significantly more complex than) the servers in Table 1.

Why is this? Even a 1-MIPS machine could generate enough new data to fill a terabyte drive in a very short time. Computers are busy enough, certainly, but most activity accesses the same data, programs, and other parts of the machine over and over again. In practice, every computer has substantial numbers of files and commands that are rarely -- if ever -- used. And any modern computer will have at least hundreds -- if not thousands -- of commands at its disposal. Computers have evolved over time to either be all things for all users or to be full of specialized little programs that no one has the heart to throw away. Fine, you may say, but why does any of this matter?

Now, a few words on looking for things. When you go looking for something specific, your chances of finding it are very bad. Because, of all the things in the world, you’re only looking for one of them. When you go looking for anything at all, your chances of finding it are very good. Because, of all the things in the world, you’re sure to find some of them.

-- Darryl Zero
The Zero Effect

When you are looking for specific items, the task is not only difficult but often painfully time consuming. A search for a lost sock, for example, will usually turn up many other hidden and lost items before uncovering your target.

So rather than looking for a particular thing on a computer, you’ll want to examine everything.

Intruders and other miscreants (almost by definition) display atypical behavior -- utilizing odd commands, looking in strange places, or doing things that normal users don’t. And problems usually consist of either anomalous actions or typical behavior gone bad -- either of which cuts down the search tremendously. Finding the results of this activity is especially simple if you know what the system normally looks like. In general, because actions and changes to data are concentrated, a shift in that locus of activity is often easy to spot.
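
One practical way to know what the system normally looks like is to keep a known-good baseline and compare against it later, in the spirit of file integrity checkers. The Python sketch below is a deliberately simple illustration of that idea, not a replacement for such a tool; /usr/bin is an arbitrary example, and a real baseline would be stored where the intruder cannot touch it.

import hashlib
import os

def snapshot(top):
    """Record a checksum for every readable file under 'top'."""
    baseline = {}
    for directory, _, files in os.walk(top):
        for name in files:
            path = os.path.join(directory, name)
            try:
                with open(path, "rb") as f:
                    baseline[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue                   # unreadable files are simply skipped
    return baseline

def compare(before, after):
    """Report files that appeared, disappeared, or changed between snapshots."""
    for path in sorted(set(before) | set(after)):
        if path not in before:
            print("added:  ", path)
        elif path not in after:
            print("removed:", path)
        elif before[path] != after[path]:
            print("changed:", path)

# Take the baseline while the system is known to be good, keep it out of the
# intruder's reach, and compare against a fresh snapshot after an incident.
good = snapshot("/usr/bin")
compare(good, snapshot("/usr/bin"))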

Intruders also often modify or damage the systems they’re on, whether it is simply eliminating or modifying audit or log evidence, putting in back doors for easy access to the system later on, or engaging in artistic WWW defacement.

Interestingly, however, destroying or modifying data to hide evidence can leave significant marks as well -- sometimes more telling than if the intruder had left the system alone. Consider the physical world -- anyone walking on a snowy walkway will obviously leave footprints. If you see the walkway clear of any tracks, it might make you suspicious: “Did someone brush away all traces of activity?” As we all know from programming experience, it is significantly easier to find a problem or bug if you know something is wrong than if you’re simply presented with a program. Like the old joke -- “It’s quiet.” “Yeah, too quiet!” -- seeing a system devoid of activity should make your forensic hackles rise.

In future installments, we will be discussing the analysis of unknown programs, the usefulness of the mtime, atime, and ctime file attributes, and the recovery and reconstruction of removed and unallocated data on UNIX filesystems.
For More Information

Roueché, Berton. The Medical Detectives, Truman Talley Books, 1947/1991.

Saferstein, Richard. Criminalistics: An Introduction to Forensic Science, Prentice Hall, 1998.
