

– Test Your System


By Overlord, June 1998. This file was put together to check the security of your network. Work through each hacking technique on your system to test for vulnerability. It is true that hackers could use this guide to break into computer networks, but I ask that no one use this file for illegal purposes. I will not be responsible for people misusing this information for illegal activities.

This guide describes the basic hacking methods. It does not cover every method of hacking, but further methods may be added in the future. The latest version of this guide is always available from hakkerz.home.ml.org. You are free to distribute this page on your site; all I ask is that you leave this notice here and place a link to hakkerz.home.ml.org on your site.


This guide is in early Beta, and starts at the absolute basics. If you like it, I'll try to keep adding sections. If you know your stuff, send me sections you've written yourself and I'll add them in.



[I Want to Start at the Start]

[I Want to Go Straight to Hacking]
















Do not scroll!
Click the links to move!















A little background is needed before we get into hacking techniques.

When we talk about ‘Hacking’, we are talking about getting access to a server that we shouldn’t have. Servers are set up so that many people can use them. These people each have different ‘accounts’ on the server – like different directories that belong just to them. If Fred has an account with the ISP (Internet Service Provider), he will be given:

(1) a login name, which is like the name of his directory; and
(2) a password, which gives him access to that directory.

This login name and password will usually give you access to all of Fred’s services - his mail, news services and web pages. There is also the ‘root’ account, which has its own login and password. This gives super-user access to the entire server. We will focus on ‘getting root’ in this guide.



[Ok, I want to move to the 'anatomy of the hack']

[I know all this, let me move straight to hacking]

[I don't have a clue what you're on about, let me read some background on this so-called "Internet" you keep referring to]




















There are two main ways to break into a system. Think of a server as a Swiss bank vault. You can try to get in by finding the combination of the vault. This is like finding the password; it is how you are meant to get in. The second way is to use dynamite and forget all about the ‘proper’ way in. This is like using ‘exploits’, or weaknesses in the server’s operating system, to gain access.



[Ok, Let's Go. Tell Me About Not Getting Caught]

[Stuff it, I know how to not get caught, on to the techniques!]















Hacking is illegal, and it is very easy to trace you if the target realizes it has been hacked. Wherever you go,
your IP number (your computer’s unique identification) is left behind and often logged. Solutions:

1. When you set up your account with an ISP, give a false name and address.



[Nah, I can't be bothered, what other things can I do?]

[Ok, I used this trick. What else can I do?]

[Stuff it, I know how to not get caught, on to the techniques!]














2. Hack using a filched account (stolen password, etc.). A tool called Dripper can steal passwords for you from public net cafes and libraries.



[Nah, just tell me something easy I can do right now]

[Ok, done. Anything else I should do?]












3. Port your connection through something else.

An easy way to do this is to change your proxy settings. If you use the proxy settings meant for a different ISP, it can look as though you are surfing from wherever that ISP is. A list of proxies you can use is here.

You should also do any important info gathering through the IP Jamming Applet on the Hacker's Homepage to hide your IP.

If you want super anonymity, you should be surfing from an account you set up under a false name, with your proxy settings changed, and also surfing through the IP Jamming applet! Be aware that some ISPs could use Caller ID to check the number of whoever is logging on. Dial the relevant code to disable Caller ID before calling your ISP.


[I don't understand about the proxy settings thing, let me read more]

[Ok, I am wired for hyper stealth... Now, I want to HACK!]












To start off, you will probably need to gather information about www. using internet tools.



[Ok, how?]

[Give me some reading to do about info gathering]

[No, I've already got all the info, just tell me what to do]










We are now taking the first steps of any hack... Info Gathering.

You should be set up for stealth mode. Get a notepad, and open a new browser window (through the IP Jammer). Bring the www.'s web page up in the IP Jammer's window. You can load the IP Jamming applet on the Hacker's Homepage.



[Ok, What Now?]












1. First, check out the site. Take down any email addresses and copy the HTML of important pages.



[Done... What Else?]











2. Send a mail that will bounce to the site. If the site is www. , send a mail to blahblahblah@ . It will bounce back to you and give you information in its header.

Copy the information from the headers down.

(To maintain anonymity, it might be a good idea to send and receive the mail from a free web based provider, such as hotmail.com. Use full stealth features when sending the bouncing mail. This will protect you when they check through the logs after they are hacked.)



[Done... What Else?]













3. Still using stealth features, Traceroute . This Traceroute search is available from the Hacker's Home Page, in the Net Tools section.

This will tell you the upstream provider of the victim server.






[Ok, what next?]
















4. Still using stealth features, Whois the site. This Whois search is available from the Hacker's Home Page, in the Net Tools section.

This will give you information on the owners and servers that run the site. Write it down.






[Ok, what next?]


















5. Finger the site. Use this finger service from the Hacker's Homepage to check the site. Try fingering just with “finger @ ” first. This sometimes tells you the names of all accounts. If this does not work, try fingering any email addresses you found on the site and through Whois. This will sometimes give you useful information.






[Ok, what next?]
















6. Now we're about to get rough on the site. Port scan the site.

Port scanning checks all open ports for an IP. It is extremely useful; however, it practically screams to the webmasters of the victim site that they are in the middle of being hacked. There is basically no legitimate reason to port scan a site unless you are about to hack it.

There are no very good ways to hide a port scan, but there are a few semi-stealthy port scanners. Most are only for Linux / Unix systems. The Exploit Generator for Windows is one that claims to be stealthy. Still, if you are trying to enter a very secure site, perhaps forget about port scanning for now, unless you are running Linux.

A port scan will tell you all the services a site is running. If port 21 is open, it means they have an FTP server. If port 23 is open, it means they have telnet.



[Ok, What next?]














7. The aim of telnetting to the site is basically to try and find out the server type. While your browser is in stealth mode, use the Anonymous Telnet applet in the Hacker's Homepage to open a Telnet window.

Telnet to the site on Port 23. Usually, if the address is “www. ”, try telnetting to " ". If this does not work, try telnetting to telnet. or to any of the sites listed as name servers in your previous Whois search. Once you have access, note any information it gives you, such as the server type.




[This worked - I got the server type!]

[None of that worked...]
















Now change the telnet to port 21. This should send you straight into the server's FTP port. If this works, try typing SYST to find out what server type it is.





[This worked - I got the server type!]

[None of that worked...]
















Now, if you have had no luck so far, try telnetting to port 80, the HTTP port. Note whether this gives you any information.



[This worked - I got the server type!]

[None of that worked...]















You *need* to know the server type to have any hope of hacking the thing. How do you expect to run exploits against it if you can't even figure out what you're dealing with here?

A final resort is to run a program called Whats Running? It doesn't work very well, but will sometimes tell you the server type. It will also probably be logged by the victim server.

If that doesn't work, do anything to find the server type. Even write them an e-mail asking  what operating system  they're running.



[Ok, I've got the Info... Now I want access!]

















We will now try to go through the front door of the server. Returning to our analogy, we are trying to find the combination of the safe.



[Ok, I Want Root!]

[Nah, I already know this server will need exploits]














You would kick yourself if you spent weeks trying advanced hacking with exploits, IP spoofing and social
engineering, just to find that you could have got in by using:

$Login: root
$Password: root

So, let’s just try this first and get it out of the way. Unix comes set up with some default passwords, and
sometimes these are not changed. So, we telnet to .

Don’t use your usual telnet program. Unless you are using a filched or anonymous account, it will show
your IP address to . With your proxies changed, and everything set for stealth, switch back to the Anonymous Telnet window.

Then try the following accounts and passwords:

(login) root: (password)root
sys: sys / system / bin
bin: sys / bin
mountfsys: mountfsys
adm: adm
uucp: uucp
nuucp: anon
anon: anon
user: user
games: games
install: install
demo: demo
umountfsys: umountfsys
sync: sync
admin: admin
guest: guest
daemon: daemon

The accounts root, mountfsys, umountfsys, install, and sometimes sync are root level accounts, meaning they have sysop power, or total power. Other logins are just "user level" logins meaning they only have power
over what files/processes they own.



[Nup... Didn't think it would work]

[Incredible... That Lame Trick Actually Worked!]


















Still simple things first. About 1 in 20 people are stupid enough to have the same login name and password. With your list of all the email addresses or finger information you dug from the site, try this.

For example, if the web site made a reference to fred@ , try logging in (through telnet or an FTP
program) to their server as:

$Login: Fred
$Password: Fred

Do this with all the names you have found - you might get lucky.

Did this work?



[Nah, they had some baddass security, didn't work]

[Oh, Golly Gee... I got access to one of the accounts!]


















You probably had no luck until now. Actually, most hacking techniques only have a slim chance of success. You just try hundreds of slim chances till you get it.

Assuming you were trying to log in on a Unix system, you may have been wondering how Unix checks to see whether the passwords you gave were correct or not. There is a file called ‘passwd’ on each Unix system which has all the passwords for each user. So, if we can’t guess the passwords, we will now try to rip this file and decrypt it.



[Make it so, Number 1]



















Your browser should be set to use the fake proxies. We will keep using this browser to FTP, because it cannot be easily traced, whereas something like CuteFTP can be traced to you because it can't use proxies. If your port scan found an open port 21, it's a pretty good indication that they run an FTP server.

Using your stealth browser, try to FTP to . Example: ftp://

If that does not work, try to FTP to ftp. . Example: ftp://ftp.

If that does not work, try to FTP to the Domain Name Servers listed when you did your WHOIS search. Example: ftp://ns1.



[Ok, I'm In]

[Nah, stupid thing won't let me in]































Now that you are connected to ’s FTP server, click on their /etc directory.

You should see a file called ‘passwd’ and maybe a file called ‘group’. Download the ‘passwd’ file, and
look at it.

If it looks like this when you open it, you are in luck:

admin:rYsKMjnvRppro:100:11:WWW administrator:/home/Common/WWW:/bin/csh
kangaroo:3A62i9qr:1012:10:Hisaharu TANAKA:/home/user/kangaroo:/usr/local/bin/tcsh

For example, from the second line we know a login is “kangaroo” and their encrypted password is “3A62i9qr”. Note - this is not their password, but an encrypted form of their password.

Or, did it look more like this:

admin:*:100:11:WWW administrator:/home/Common/WWW:/bin/csh
kangaroo:*:1012:10:Hisaharu TANAKA:/home/user/kangaroo:/usr/local/bin/tcsh

Is the second section, the encrypted password, replaced by *’s or x’s? This is bad – it is called a shadowed
password and cannot be decrypted. This is how most passwd files are nowadays. However, if you got a
passwd file which has some non-shadowed entries, you can try your hand at decrypting it.


[Nah, It was all shadowed]

[Nah, couldn't find the passwd file in the first place]

[Yes! I think I got some non-shadowed passwords]



















There are a few programs around which were written to decrypt Unix passwd files. The most famous one was called ‘Cracker Jack’. Many ‘hacking’ texts strongly recommend this program – but they are mostly talking rubbish. It's old, and most systems will just crash when they try to run it, as it uses weird memory allocation.

The best Unix cracker around is currently called 'John the Ripper 1.5’. It is readily available. It was only written in the last year or so, and is a lot faster than Cracker Jack ever was. John the Ripper was also designed with Pentiums in mind, and the brute-force technique it uses is genius. But you have to go down to DOS to use it.

You will also need a large ‘wordfile’ containing every English word. The bigger the better. The crack programs test every word in the wordfile against the passwd file. If the wordfile is big enough, you have a good chance of getting a password.



[Yes! I Got Me Some Decrypted Passwords!]

[Nah, the Encryption was too Good]

[Give me some reading about all the different password crackers, where to find them, etc.]

















Although most servers have now trashed a program called PHF, let's just make sure... If it is working, it lets you get the passwd file remotely, even if it is inside hidden and root-access-only directories.

In the Overlord Anonymizer, type:




If PHF is active (often it is not), this string will print the etc/passwd file straight to your web browser. All you need to do is save it as a file and again run a crack program against it.

Now, if you see the words 'Smile! You're on Candid Camera!', it means that the server is protected against this hack, and has logged your IP. But don't worry. So long as you were using the anonymizer, you are safe.



[Nah, they fixed that PHF Bug Problem]

[Yes! I Got Me Some Encrypted Passwords!]









Finger servers are a hacker's friend. Let's find out whether www. has a finger server.

In the Anonymizer, assuming that the server's name starts with www, type www./cgi-bin/finger


If the finger gateway is operational, a box should appear for you to enter the name you want to finger. This gives you another chance to receive the etc/passwd file.

Okay, 1/ get your list of e-mail addresses you found for the site (let's pretend one of them is "kangaroo@", and that your email address is "your@email.org")

2/ Go back to the finger box, and type this in (changing these email addresses for the real ones):

kangaroo@; /bin/mail your@email.org < etc/passwd

This takes the passwd file through kangaroo@ and emails it to your email address. If this works you now have the etc/passwd file in your mailbox.... you can now run a crack program against it and have a little fun on their box.



[Nah, it didn't work]

[Yes! I Got Me Some Encrypted Passwords!]















You have reached the current limit of the tutorials.... I will add further steps when I get the time and if people like these lessons. Also, if people want to write sections up for this, just mail the sections to me, to the e-mail address listed on the Hacker's Homepage.

Until this gets bigger, I can suggest a few books that teach hacking. I've found that a lot of books are rubbish and just teach how to change screen colours, but there are a few that every hacker should have in their library.




1. MAXIMUM SECURITY: Of course, Maximum Security has to be at number one. I guess this would probably be the central book in any hacker's library. Goes through a heap of techniques like a textbook with over 900 pages.

2. THE HAPPY HACKER: Essential for newbies. Although this book gets bagged a lot by people who hate Carolyn, I think most people agree it would be the perfect first book for a newbie to read. It explains things pretty well, has some spelling mistakes, but is probably an essential newbie primer. Though, as I say, if you know your stuff you can safely forget this one.



1. LINUX: You will need to change to Linux to do any serious hacking. Thankfully, it is fairly simple: you can just set up Linux in a separate partition on your hard drive and set a dual-boot option, usually Windows, and, when you are hacking, Linux. Your amount of 'Net Power' increases 500%. If you want to buy Linux, make sure you get the latest version, not an obsolete one. There are also several different 'flavours' of Linux; you will probably want to start with RedHat, then possibly move to Slackware after a year or so. So make sure you get a deal which gives you the opportunity to check out some of the different distributions. By far the best Linux deal I've found around is this one. It has an excellent Linux manual, and comes with three separate Linux distributions on 3 CDs, including the very latest RedHat and Slackware. It's also excellent value (about half the price of buying the single 'official' RedHat CD).


(These books are mainly just part of the Hacker Culture)

3. THE WATCHMAN: The Twisted Life and Crimes of Serial Hacker Kevin Poulsen: This one will not teach you anything, so stuff it if you just want to learn. Although it was one of the best reads I ever had. More like a thriller, but it was real! The Kevin Mitnick books are about the same, but this one deals a lot with phreaking, and scamming radio stations out of cars. But, as I say, it doesn't go through any techniques.

4. THE FUGITIVE GAME: Online With Kevin Mitnick: Again, a really fun read (though, I prefer the Poulsen book) but it doesn't go through any hacking techniques. But I have to list it here because it is such a good read. It's also a really cheap buy.

5. TAKEDOWN: The Pursuit and Capture of Kevin Mitnick: This is the other side of the Mitnick story (written by the people who chased him). Interesting, but the essential Mitnick book is the one above. Though, this is a very good primer on how the FBI operates to capture hackers. But, again, no techniques listed. For techniques, you would only have luck in the first two books listed.


Okay, as for programming books – stuff it. You can download tutorials for free if you search for "perl + programming + tutorial" and things like that. Unless you like printed books, forget them. So, the only other thing is Linux. You will need to have Linux as a dual-boot option on your PC if you want to do any serious hacking.

Some books that suck: these are some books going around that are a rip-off. SECRETS OF A SUPER HACKER: This is another book that a lot of people have. It seemed like a real waste of time to me.

So, keep going through this tutorial as it gets bigger, read anything you find on the web. Get some of the major books above, at least 1 and 2, and read them very carefully - four or five times. Join your local Linux users group, if you have one. And, later on, download a few guides on programming and read through them when you get some time. With some effort (it isn't easy), you can become a respected hacker and take control of the Net.




[Back to Index]















You have gained access.

If you now have the login name and password, you can use the user’s mail account, FTP privileges (change their web pages by uploading new ones), and HTTP access.


(If you have only got access to a user-level account, do not despair. It is easy to use a user-level account to later get a root-level account. More on this when this guide is expanded.)



[Back to The Hacker's Homepage]
























Readers already familiar with the Internet's early development may wish to bypass this little slice of history. The story has been told many times.

Our setting is the early 1960s: 1962, to be exact. Jack Kennedy was in the White House, the Beatles had just recorded their first hit single (Love Me Do), and Christa Speck, a knock-out brunette from Germany, made Playmate of the Year. Most Americans were enjoying an era of prosperity. Elsewhere, however, Communism was spreading, and with it came weapons of terrible destruction.

In anticipation of impending atomic disaster, The United States Air Force charged a small group of researchers with a formidable task: creating a communication network that could survive a nuclear attack. Their concept was revolutionary: a network that had no centralized control. If 1 (or 10, or 100) of its nodes were destroyed, the system would continue to run. In essence, this network (designed exclusively for military use) would survive the apocalypse itself (even if we didn't).

The individual largely responsible for the creation of the Internet is Paul Baran. In 1962, Baran worked at RAND Corporation, the think tank charged with developing this concept. Baran's vision involved a network constructed much like a fishnet. In his now-famous memorandum titled On Distributed Communications: I. Introduction to Distributed Communications Network, Baran explained:

The centralized network is obviously vulnerable as destruction of a single central node destroys communication between the end stations. In practice, a mixture of star and mesh components is used to form communications networks. Such a network is sometimes called a 'decentralized' network, because complete reliance upon a single point is not always required.

Cross Reference: The RAND Corporation has generously made this memorandum and the report delivered by Baran available via the World Wide Web. The documents can be found at http://www.rand.org/publications/electronic/.

Baran's model was complex. His presentation covered every aspect of the proposed network, including routing conventions. For example, data would travel along the network by whatever channels were available at that precise moment. In essence, the data would dynamically determine its own path at each step of the journey. If it encountered some sort of problem at one crossroads of the Net, the data would find an alternate route. Baran's proposed design provided for all sorts of contingencies. For instance, a network node would only accept a message if that node had adequate space available to store it. Equally, if a data message determined that all nodes were currently unavailable (the all lines busy scenario), the message would wait at the current node until a data path became available. In this way, the network would provide intelligent data transport. Baran also detailed other aspects of the network, including

- Security

- Priority schemes (and devices to avoid network overload)

- Hardware

- Cost


In essence, Baran eloquently articulated the birth of a network in painstaking detail. Unfortunately, however, his ideas were ahead of their time. The Pentagon had little faith in such radical concepts. Baran delivered to defense officials an 11-volume report that was promptly shelved.

The Pentagon's shortsightedness delayed the birth of the Internet, but not by much. By 1965, the push was on again. Funding was allocated for the development of a decentralized computer network, and in 1969, that network became a reality. That system was called ARPANET.

As networks go, ARPANET was pretty basic, not even closely resembling the Internet of today. Its topology consisted of links between machines at four academic institutions (Stanford Research Institute, the University of Utah, the University of California at Los Angeles, and the University of California at Santa Barbara).

One of those machines was a DEC PDP-10. Only those more mature readers will remember this model. These are massive, ancient beasts, now more useful as furniture than computing devices. I mention the PDP-10 here to briefly recount another legend in computer history (one that many of you have never heard). By taking this detour, I hope to give you a frame of reference from which to measure how incredibly long ago this was in computer history.

It was at roughly that time that a Seattle, Washington, company began providing computer time sharing. The company reportedly took on two bright young men to test its software. These young men both excelled in computer science, and were rumored to be skilled in the art of finding holes within systems. In exchange for testing company software, the young men were given free dial-up access to a PDP-10 (this would be the equivalent of getting free access to a private bulletin board system). Unfortunately for the boys, the company folded shortly thereafter, but the learning experience changed their lives. At the time, they were just old enough to attend high school. Today, they are in their forties. Can you guess their identities? The two boys were Bill Gates and Paul Allen.

In any event, by 1972, ARPANET had some 40 hosts (in today's terms, that is smaller than many local area networks, or LANs). It was in that year that Ray Tomlinson, a member of Bolt, Beranek, and Newman, Inc., forever changed the mode of communication on the network. Tomlinson created electronic mail.

Tomlinson's invention was probably the single most important computer innovation of the decade. E-mail allowed simple, efficient, and inexpensive communication between various nodes of the network. This naturally led to more active discussions and the open exchange of ideas. Because many recipients could be added to an e-mail message, these ideas were more rapidly implemented. (Consider the distinction between e-mail and the telephone. How many people can you reach with a modern conference call? Compare that to the number of people you can reach with a single e-mail message. For group-oriented research, e-mail cannot be rivaled.) From that point on, the Net was alive.

In 1974, Tomlinson contributed to another startling advance. He (in parallel with Vinton Cerf and Robert Kahn) invented the Transmission Control Protocol (TCP). This protocol was a new means of moving data across the network piece by piece and then later reassembling these fragments at the other end.

NOTE: TCP is the primary protocol used on the Internet today. It was developed in the early 1970s and was ultimately integrated into Berkeley Software Distribution UNIX. It has since become an Internet standard. Today, almost all computers connected to the Internet run some form of TCP.

By 1975, ARPANET was a fully functional network. The groundwork had been done and it was time for the U.S. government to claim its prize. In that year, control of ARPANET was given to an organization then known as the United States Defense Communications Agency (this organization would later become the Defense Information Systems Agency).

To date, the Internet is the largest and most comprehensive structure ever designed by humankind. Next, I will address some peripheral technological developments that helped form the network and bring it to its present state of complexity. To do this, I will start with C.

What Is C?

C is a popular computer programming language, often used to write language compilers and operating systems. I examine C here because its development (and its relationship to the UNIX operating system) is directly relevant to the Internet's development.

Nearly all applications designed to facilitate communication over the Internet are written in C. Indeed, both the UNIX operating system (which forms the underlying structure of the Internet) and TCP/IP (the suite of protocols used to traffic data over the Net) were developed in C. It is no exaggeration to say that if C had never emerged, the Internet as we know it would never have existed at all.

For most non-technical users, programming languages are strange, perplexing things. However, programming languages (and programmers) are the very tools by which a computer program (commonly called an application) is constructed. It may interest you to know that if you use a personal computer or workstation, better than half of all applications you now use were written in the C language. (This is true of all widely used platforms, including Macintosh.) In this section, I want to briefly discuss C and pay some homage to those who helped develop it. These folks, along with Paul Baran, Ken Thompson, and a handful of others, are the grandparents of the Internet.

C was created in the early 1970s by Dennis M. Ritchie at Bell Labs; Brian W. Kernighan co-authored the definitive book on the language. These two men are responsible for many technological advancements that formed the modern Internet, and their names appear several times throughout this book.

Let's discuss a few basic characteristics of the C programming language. To start, C is a compiled as opposed to an interpreted language. I want to take a moment to explain this critical distinction because many of you may lack programming experience.

Interpreted Programming Languages

Most programs are written in plain, human-readable text. This text is made up of various commands and blocks of programming code called functions. In interpreted languages, this text remains in human-readable form. In other words, such a program file can be loaded into a text editor and read without event.

For instance, examine the program that follows. It is written for the Practical Extraction and Report Language (Perl). The purpose of this Perl program is to get the user's first name and print it back out to the screen.

NOTE: Perl is strictly defined as an interpreted language, but it does perform a form of compilation. However, that compilation occurs in memory and never actually changes the physical appearance of the programming code.

This program is written in plain English:

print "Please enter your first name: ";
$user_firstname = <STDIN>;
chomp($user_firstname);
print "Hello, $user_firstname\n";
print "Are you ready to hack?\n";

Its construction is designed to be interpreted by Perl. The program performs five functions:

- It prints the message Please enter your first name:
- It accepts the user's input and stores it in the variable $user_firstname
- It removes the trailing newline from the input
- It prints Hello, followed by the user's first name
- It prints the question Are you ready to hack?

Interpreted languages are commonly used for programs that perform trivial tasks or tasks that need be done only once. These are sometimes referred to as throwaway programs. They can be written quickly and take virtually no room on the local disk.

Such interpreted programs are of limited use. For example, in order to run, they must be executed on a machine that contains the command interpreter. If you take a Perl script and install it on a DOS-based machine (without first installing the Perl interpreter), it will not run. The user will be confronted with an error message (Bad command or file name). Thus, programs written in Perl are dependent on the interpreter for execution.

Microsoft users will be vaguely familiar with this concept in the context of applications written in Visual Basic (VB). VB programs typically rely on runtime libraries such as VBRUN400.DLL. Without such libraries present on the drive, VB programs will not run.

Cross Reference: Microsoft users who want to learn more about such library dependencies (but don't want to spend the money for VB) should check out Envelop. Envelop is a completely free 32-bit programming environment for Windows 95 and Windows NT. It very closely resembles Microsoft Visual Basic and generates attractive, fully functional 32-bit programs. It, too, has a set of runtime libraries and extensive documentation about how those libraries interface with the program. You can get it at ftp://ftp.cso.uiuc.edu/pub/systems/pc/winsite/win95/programr/envlp14.exe

The key advantages of interpreted languages include

- Speed of development: programs can be written and run immediately, with no compilation step
- Small size: scripts take virtually no room on the local disk
- Easy modification: because the program remains plain text, it can be changed with any editor

Interpreted languages are popular, particularly in the UNIX community. Here is a brief list of some well-known interpreted languages:

- Perl
- Python
- Tcl
- The shell languages (sh, csh, and ksh)
- awk

The pitfall of using an interpreted language is that programs written in interpreted languages are generally much slower than those written in compiled languages.

Compiled Languages

Compiled languages (such as C) are much different. Programs written in compiled languages must be converted into binary format before they can be executed. In many instances, this format is almost pure machine-readable code. To generate this code, the programmer sends the human-readable program code (plain text) through a compilation process. The program that performs this conversion is called a compiler.

After the program has been compiled, no interpreter is required for its execution. It will run on any machine that runs the target operating system for which the program was written. Exceptions to this rule may sometimes apply to certain portions of a compiled program. For example, certain graphical functions are dependent on proprietary graphics libraries. When a C program is written using such graphical libraries, certain library components must be shipped with the binary distribution. If such library components are missing when the program is executed, the program will exit on error.

The first interesting point about compiled programs is that they are fast. Because the program is loaded entirely into memory on execution (as opposed to being interpreted first), a great deal of speed is gained. However, as the saying goes, there is no such thing as a free lunch. Thus, although compiled programs are fast, they are also much larger than programs written in interpreted languages.

Examine the following C program. It is identical in function to the Perl program listed previously. Here is the code in its yet-to-be-compiled state:

#include <stdio.h>

int main(void)
{
    char name[20];

    printf("Please enter your first name:   ");
    scanf("%19s", name);
    printf("Hello, %s\n", name);
    printf("Are you ready to hack?\n");
    return 0;
}
Using a standard C compiler, I compiled this code in a UNIX operating system environment. The difference in size between the two programs (the one in Perl and the one in C) was dramatic. The Perl program was 150 bytes in size; the C program, after being compiled, was 4141 bytes.

This might seem like a huge liability on the part of C, but in reality, it isn't. C code can be ported to almost every operating system, and a compiled C binary will run on any operating system of the class for which it was compiled. If compiled for DOS, it will work equally well under all DOS-like environments (such as PC-DOS or NDOS), not just Microsoft DOS.

Modern C: The All-Purpose Language

C has been used over the years to create all manner of programs on a variety of platforms. Many Microsoft Windows applications have been written in C. Similarly, as I will explain later in this chapter, nearly all basic UNIX utilities are written in C.

To generate programs written in C, you must have a C compiler. C compilers are available for most platforms. Some of these are commercial products and some are free to the public. Table 7.1 lists common C compilers and the platforms on which they are available.

Table 7.1. C compilers and their platforms.


Compiler                  Platform
GNU C (free)              UNIX, Linux, DOS, VAX
Borland C                 DOS, Windows, Windows NT
Microsoft C               DOS, Windows, Windows NT
Watcom C                  DOS, Windows, Windows NT, OS/2
Metrowerks CodeWarrior    Mac, Windows, BeOS
Symantec                  Macintosh, Microsoft platforms

Advantages of C

One primary advantage of the C language is that it is smaller than many other languages. The average individual can learn C within a reasonable period of time. Another advantage is that C now conforms to a national standard (ANSI C). Thus, a programmer can learn C and apply that knowledge on any platform, anywhere in the country.

C has direct relevance to the development of the Internet. As I have explained, most modern TCP/IP implementations are written in C, and these form the basis of data transport on the Internet. More importantly, C was used in the development of the UNIX operating system. As I will explain in the next section of this chapter, the UNIX operating system has, for many years, formed the larger portion of the Internet.

C has other advantages: One is portability. You may have seen statements on the Internet about this or that program being ported to another operating system or platform. Portability refers to the capability of a program to be reworked to run on a platform other than the one for which it was originally designed (for example, taking a program written for Microsoft Windows and porting it to the Macintosh platform). Portability is especially important in an environment like the Internet, because the Internet hosts many different types of systems. In order to make a program available networkwide, that program must be easily conformable to all platforms.

Unlike code in other languages, C code is highly portable. For example, consider Visual Basic. Visual Basic is a wonderful rapid application development tool that can build programs to run on any Microsoft-based platform. However, that is the extent of it. You cannot take the raw code of a VB application and recompile it on a Macintosh or a Sun SPARCstation.

In contrast, the majority of C programs can be ported to a wide variety of platforms. As such, C-based programs available for distribution on the Internet are almost always distributed in source form (in other words, they are distributed in plain text code form, or in a form that has not yet been compiled). This allows the user to compile the program specifically for his or her own operating system environment.

Limitations of C and the Creation of C++

Despite these wonderful features, C has certain limitations. For example, C is not an object-oriented language, and managing very large programs in C (where the code exceeds 100,000 lines) can be difficult. For this, C++ was created. C++'s lineage is deeply rooted in C, but it works differently. Because this section contains only brief coverage of C, I will not discuss C++ extensively. However, you should note that C++ is generally included as an option in most modern C compilers.

C++ is an extremely powerful programming language and has led to dramatic changes in the way programming is accomplished. C++ allows for encapsulation of complex functions into entities called objects. These objects allow easier control and organization of large and complex programs.

In closing, C is a popular, portable, and lightweight programming language. It is based on a national standard and was used in the development of the UNIX operating system.

Cross Reference: Readers who want to learn more about the C programming language should obtain the book The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie (Prentice Hall, ISBN 0-13-110370-9). This book is a standard. It is extremely revealing; after all, one of its authors created the language.

Other popular books on C include

C: A Reference Manual. Samuel P. Harbison and Guy L. Steele. Prentice Hall. ISBN 0-13-109802-0. 1987.

Teach Yourself C in 21 Days. Peter Aitken and Bradley Jones. Sams Publishing. ISBN 0-672-30448-1.

Teach Yourself C. Herbert Schildt. Osborne McGraw-Hill. ISBN 0-07-881596-7.


UNIX

The UNIX operating system has a long and rich history. Today, UNIX is one of the most widely used operating systems, particularly on the Internet. In fact, UNIX actually comprises much of the Net, being the number one system used on servers in the void.

Created in 1969 by Ken Thompson of Bell Labs, the first version of UNIX ran on a Digital Equipment Corporation (DEC) PDP-7. Of course, that system bore no resemblance to modern UNIX. For example, UNIX has been traditionally known as a multiuser system (in other words, many users can work simultaneously on a single UNIX box). In contrast, the system created by Thompson was reportedly a single-user system, and a bare bones one at that.

When users today think of an operating system, they imagine something that includes basic utilities, text editors, help files, a windowing system, networking tools, and so forth. This is because the personal computer has become a household item. As such, end-user systems incorporate great complexity and user-friendly design. Alas, the first UNIX system was nothing like this. Instead, it was composed of only the most necessary utilities to operate effectively. For a moment, place yourself in Ken Thompson's position. Before you create dozens of complex programs like those mentioned previously, you are faced with a more practical task: getting the system to boot.

In any event, Thompson and Dennis Ritchie ported UNIX to a DEC PDP-11/20 a year later. From there, UNIX underwent considerable development. Between 1970 and 1973, UNIX was completely reworked and written in the C programming language. This was reportedly a major improvement and eliminated many of the bugs inherent to the original implementation.

In the years that followed, UNIX source code was distributed to universities throughout the country. This, more than any other thing, contributed to the success of UNIX.

First, the research and academic communities took an immediate liking to UNIX. Hence, it was used in many educational exercises. This had a direct effect on the commercial world. As explained by Mike Loukides, an editor for O'Reilly & Associates and a UNIX guru:

Schools were turning out loads of very competent computer users (and systems programmers) who already knew UNIX. You could therefore "buy" a ready-made programming staff. You didn't have to train them on the intricacies of some unknown operating system.

Also, because the source was free to these universities, UNIX was open for development by students. This openness quickly led to UNIX being ported to other machines, which only increased the UNIX user base.

NOTE: Because UNIX source is widely known and available, more flaws in the system security structure are also known. This is in sharp contrast to proprietary systems. Such proprietary software manufacturers refuse to disclose their source except to very select recipients, leaving many questions about their security as yet unanswered.

Several years passed, and UNIX continued to gain popularity. It became so popular, in fact, that in 1978, AT&T decided to commercialize the operating system and demand licensing fees (after all, it had obviously created a winning product). This caused a major shift in the computing community. As a result, the University of California at Berkeley created its own version of UNIX, thereafter referred to as the Berkeley Software Distribution or BSD. BSD was (and continues to be) extremely influential, being the basis for many modern forms of commercial UNIX.

An interesting development occurred during 1980. Microsoft released a new version of UNIX called XENIX. This was significant because the Microsoft product line was already quite extensive. For example, Microsoft was selling versions of BASIC, COBOL, Pascal, and FORTRAN. However, despite a strong effort by Microsoft to make its XENIX product fly (and even an endorsement by IBM to install the XENIX operating system on its new PCs), XENIX would ultimately fade into obscurity. Its popularity lasted a mere five years. In contrast, MS-DOS (released only one year after XENIX was introduced) took the PC world by storm.

Today, there are many commercial versions of UNIX. I have listed a few of them in Table 7.2.

Table 7.2. Commercial versions of UNIX and their manufacturers.


UNIX Version       Software Company
SunOS & Solaris    Sun Microsystems
HP-UX              Hewlett-Packard
IRIX               Silicon Graphics (SGI)
DEC UNIX           Digital Equipment Corporation

These versions of UNIX run on proprietary hardware platforms, on high-performance machines called workstations. Workstations differ from PC machines in several ways. For one thing, workstations contain superior hardware and are therefore more expensive. This is due in part to the limited number of workstations built. PCs are manufactured in large numbers, and manufacturers are constantly looking for ways to cut costs. A consumer buying a new PC motherboard has a much greater chance of receiving faulty hardware. Conversely, workstation buyers enjoy more reliability, but may pay five or even six figures for their systems.

The trade-off is a hard choice. Naturally, for average users, workstations are both impractical and cost prohibitive. Moreover, PC hardware and software are easily obtainable, simple to configure, and widely distributed.

Nevertheless, workstations have traditionally been more technologically advanced than PCs. For example, onboard sound, Ethernet, and SCSI were standard features of workstations in 1989. In fact, onboard ISDN was integrated not long after ISDN was developed.

Differences also exist depending upon manufacturer. For example, Silicon Graphics (SGI) machines contain special hardware (and software) that allows them to generate eye-popping graphics. These machines are commonly used in the entertainment industry, particularly in film. Because of the extraordinary capabilities of the SGI product line, SGI workstations are unrivaled in the graphics industry.

However, we are only concerned here with the UNIX platform as it relates to the Internet. As you might guess, that relationship is strong. As I noted earlier, the U.S. government's development of the Internet was implemented on the UNIX platform. As such, today's UNIX system contains within it the very building blocks of the Net. No other operating system had ever been so expressly designed for use with the Internet. (Although Bell Labs is currently developing a system that may even surpass UNIX in this regard.)

Modern UNIX can run on a wide variety of platforms, including IBM-compatible and Macintosh. Installation is typically straightforward and differs little from installation of other operating systems. Most vendors provide CD-ROM media. On workstations, installation is performed by booting from a CD-ROM. The user is given a series of options and the remainder of the installation is automatic. On other hardware platforms, the CD-ROM medium is generally accompanied by a boot disk that loads a small installation routine into memory.

Likewise, starting a UNIX system is similar to booting other systems. The boot routine makes quick diagnostics of all existing hardware devices, checks the memory, and starts vital system processes. In UNIX, some common system processes started at boot include

- init, the parent of all other processes
- getty processes, which provide login prompts on terminals
- Daemons such as syslogd (system logging), cron (job scheduling), and inetd (network services)

After the system boots successfully, a login prompt is issued to the user. Here, the user provides his or her login username and password. When login is complete, the user is generally dropped into a shell environment. A shell is an environment in which commands can be typed and executed. In this respect, at least in appearance, basic UNIX marginally resembles MS-DOS. Navigation of directories is accomplished by changing from one directory to another. DOS users can easily navigate a UNIX system using the conversion information in Table 7.3.

Table 7.3. Command conversion table: UNIX to DOS.


DOS Command       UNIX Equivalent
cd <directory>    cd <directory>
dir               ls -l
type|more         more
help <command>    man <command>
edit              vi

Cross Reference: Readers who wish to know more about basic UNIX commands should point their WWW browser to http://www.geek-girl.com/Unixhelp/. This archive is one of the most comprehensive collections of information about UNIX currently online.
Equally, more serious readers may wish to have a handy reference at their immediate disposal. For this, I recommend UNIX Unleashed (Sams Publishing). The book was written by several talented UNIX wizards and provides many helpful tips and tricks on using this popular operating system.

Say, What About a Windowing System?

UNIX supports many windowing systems. Much depends on the specific platform. For example, most companies that have developed proprietary UNIX systems have also developed their own windowing packages, either partially or completely. In general, however, all modern UNIX systems support the X Window System from the Massachusetts Institute of Technology (MIT). Whenever I refer to the X Window System in this book (which is often), I refer to it as X. I want to quickly cover X because some portions of this book require you to know about it.

In 1984, the folks at MIT founded Project Athena. Its purpose was to develop a system of graphical interface that would run on workstations or networks of disparate design. During the initial stages of research, it immediately became clear that in order to accomplish this task, X had to be hardware independent. It also had to provide transparent network access. As such, X is not only a windowing system, but a network protocol based on the client/server model.

The individuals primarily responsible for early development of X were Robert Scheifler and Ron Newman, both from MIT, and Jim Gettys of DEC. X vastly differs from other types of windowing systems (for example, Microsoft Windows), even with respect to the user interface. This difference lies mainly in a concept sometimes referred to as workbench or toolkit functionality. That is, X allows users to control every aspect of its behavior. It also provides an extensive set of programming resources. X has often been described as the most complex and comprehensive windowing system ever designed. X provides for high-resolution graphics over network connections at high speed and throughput. In short, X comprises some of the most advanced windowing technology currently available. Some users characterize the complexity of X as a disadvantage, and there is probably a bit of merit to this. So many options are available that the casual user may quickly be overwhelmed.

Cross Reference: Readers who wish to learn more about X should visit the site of the X Consortium. The X Consortium comprises the authors of X. This group constantly sets and improves standards for the X Window System. Its site is at http://www.x.org/.

NOTE: Certain versions of X can be run on IBM-compatible machines in a DOS/Windows Environment.

Users familiar with the Microsoft platform can grasp the use of X in UNIX by likening it to the relationship between DOS and Microsoft Windows 3.11. The basic UNIX system is always available as a command-line interface and remains active and accessible, even when the user enters the X environment. In this respect, X runs on top of the basic UNIX system. While in the X environment, a user can access the UNIX command-line interface through a shell window (this at least appears to function much like the MS-DOS prompt window option available in Microsoft Windows). From this shell window, the user can perform tasks, execute commands, and view system processes at work.

Users typically start the X Window System by issuing the following command:

startx

X can run a series of window managers. Each window manager has a different look and feel. Some of these (such as twm) appear quite bare bones and technical, while others are quite attractive, even fancy. There is even one X window manager available that emulates the Windows 95 look and feel. Other platforms are likewise emulated, including the NeXT window system and the Amiga Workbench system. Other windowing systems (some based on X and some proprietary) are shown in Table 7.4.

Table 7.4. Common windowing systems in UNIX.


Window System    Company
OpenWindows      Sun Microsystems
AIXWindows       IBM
HP VUE           Hewlett-Packard
Indigo Magic     Silicon Graphics

What Kinds of Applications Run on UNIX?

Many types of applications run on UNIX. Some of these are high-performance applications for use in scientific research and artificial intelligence. I have already mentioned that certain high-level graphics applications are also common, particularly to the SGI platform. However, not every UNIX application is so specialized or eclectic. Perfectly normal applications run in UNIX, and many of them are recognizable names common to the PC and Mac communities (such as Adobe Photoshop, WordPerfect, and other front-line products).

Equally, I don't want readers to get the wrong idea. UNIX is by no means a platform that lacks a sense of humor or fun. Indeed, there are many games and amusing utilities available for this unique operating system.

Essentially, modern UNIX is much like any other platform in this respect. Window systems tend to come with suites of applications integrated into the package. These include file managers, text editors, mail tools, clocks, calendars, calculators, and the usual fare.

There is also a rich collection of multimedia software for use with UNIX, including movie players, audio CD utilities, recording facilities for digital sound, two-way camera systems, multimedia mail, and other fun things. Basically, just about anything you can think of has been written for UNIX.

UNIX in Relation to Internet Security

Because UNIX supports so many avenues of networking, securing UNIX servers is a formidable task. This is in contrast to servers implemented on the Macintosh or IBM-compatible platforms. The operating systems most common to these platforms do not support anywhere close to the number of network protocols natively available under UNIX.

Traditionally, UNIX security has been a complex field. In this respect, UNIX is often at odds with itself. UNIX was developed as the ultimate open system (that is, its source code has long been freely available, the system supports a wide range of protocols, and its design is uniquely oriented to facilitate multiple forms of communication). These attributes make UNIX the most popular networking platform ever devised. Nevertheless, these same attributes make security a difficult thing to achieve. How can you allow every manner of open access and fluid networking while still providing security?

Over the years, many advances have been made in UNIX security. These, in large part, were spawned by governmental use of the operating system. Most versions of UNIX have made it to the Evaluated Products List (EPL). Some of these advances (many of which were implemented early in the operating system's history) include

- Encrypted storage of user passwords
- File- and directory-level access permissions
- System logging and auditing facilities

UNIX is used in many environments that demand security. As such, there are hundreds of security programs available to tune up or otherwise improve the security of a UNIX system. Many of these tools are freely available on the Internet. Such tools can be classified into two basic categories:

- Security audit tools
- System logging tools

Security audit tools tend to be programs that automatically detect holes within systems. These typically check for known vulnerabilities and common misconfigurations that can lead to security breaches. Such tools are designed for wide-scale network auditing and, therefore, can be used to check many machines on a given network. These tools are advantageous because they reveal inherent weaknesses within the audited system. However, these tools are also liabilities because they provide powerful capabilities to crackers in the void. In the wrong hands, these tools can be used to compromise many hosts.

Conversely, system logging tools are used to record the activities of users and system messages. These logs are recorded to plain text files or files that automatically organize themselves into one or more database formats. Logging tools are a staple resource in any UNIX security toolbox. Often, the logs generated by such utilities form the basis of evidence when you pursue an intruder or build a case against a cracker. However, deep logging of the system can be costly in terms of disk space. Moreover, many of these tools work flawlessly at collecting data, but provide no easy way to interpret it. Thus, security personnel may be faced with writing their own programs to perform this task.

UNIX security is a far more difficult field than security on other platforms, primarily because UNIX is such a large and complicated operating system. Naturally, this means that obtaining personnel with true UNIX security expertise may be a laborious and costly process. Although such people are not particularly rare, most of them already occupy key positions in firms throughout the nation. As a result, consulting in this area has become a lucrative business.

One good point about UNIX security is that because UNIX has been around for so long, much is known about its inherent flaws. Although new holes crop up on a fairly regular basis, their sources are quickly identified. Moreover, the UNIX community as a whole is well networked with respect to security. There are many mailing lists, archives, and online databases of information dealing with UNIX security. The same cannot be so easily said for other operating systems. Nevertheless, this trend is changing, particularly with regard to Microsoft Windows NT. There is now strong support for NT security on the Net, and that support is growing each day.

The Internet: How Big Is It?

This section requires a bit more history, and I am going to run through it rapidly. Early in the 1980s, the Internet as we now know it was born. The number of hosts was in the hundreds, and it seemed to researchers even then that the Internet was massive. Sometime in 1986, the first freely available public access server was established on the Net. It was only a matter of time--a mere decade, as it turned out--before humanity would storm the beach of cyberspace; it would soon come alive with the sounds of merchants peddling their wares.

By 1988, there were more than 50,000 hosts on the Net. Then a bizarre event took place: In November of that year, a worm program was released into the network. This worm infected numerous machines (reportedly over 5,000) and left them in various stages of disrupted service or distress. This brought the Internet into the public eye in a big way, plastering it across the front pages of our nation's newspapers.

By 1990, the number of Internet hosts exceeded 300,000. For a variety of reasons, the U.S. government released its hold on the network in this year, leaving it to the National Science Foundation (NSF). The NSF had instituted strong restrictions against commercial use of the Internet. However, amid debates over cost considerations (operating the Internet backbone required substantial resources), the NSF lifted those restrictions in 1991, opening the way for commercial entities to seize control of network bandwidth.

Still, however, the public at large did not advance. The majority of private Internet users got their access from providers like Delphi. Access was entirely command-line based and far too intimidating for the average user. This changed suddenly when revolutionary software developed at the University of Minnesota was released. It was called Gopher. Gopher was the first Internet navigation tool for use in GUI environments. The World Wide Web browser followed soon thereafter.

In 1995, NSF retired entirely from its long-standing position as overseer of the Net. The Internet was completely commercialized almost instantly as companies across America rushed to get connected to the backbone. The companies were immediately followed by the American public, which was empowered by new browsers such as NCSA Mosaic, Netscape Navigator, and Microsoft Internet Explorer. The Internet was suddenly accessible to anyone with a computer, a windowing system, and a mouse.

Today, the Internet sports more than 10 million hosts and reportedly serves some 40 million individuals. Some projections indicate that if Internet usage continues along its current path of growth, the entire Western world will be connected by the year 2001. Barring some extraordinary event to slow this path, these estimates are probably correct.

Today's Internet is truly massive, housing hundreds of thousands of networks. Many of these run varied operating systems and hardware platforms. Well over 100 countries besides the United States are connected, and that number is increasing every year. The only question is this: What does the future hold for the Internet?

The Future

There have been many projections about where the Internet is going. Most of these projections (at least those of common knowledge to the public) are cast by marketeers and spin doctors anxious to sell more bandwidth, more hardware, more software, and more hype. In essence, America's icons of big business are trying to control the Net and bend it to their will. This is a formidable task for several reasons.

One is that the technology for the Internet is now moving faster than the public's ability to buy it. For example, much of corporate America is intent on using the Internet as an entertainment medium. The network is well suited for such purposes, but implementation is difficult, primarily because average users cannot afford the necessary hardware to receive high-speed transmissions. Most users are getting along with modems at speeds of 28.8Kbps. Other options exist, true, but they are expensive. ISDN, for example, is a viable solution only for folks with funds to spare or for companies doing business on the Net. It is also of some significance that ISDN is more difficult to configure--on any platform--than the average modem. For some of my clients, this has been a significant deterrent. I occasionally hear from people who turned to ISDN, found the configuration problems overwhelming, and found themselves back at 28.8Kbps with conventional modems. Furthermore, in certain parts of the country, the mere use of an ISDN telephone line costs money for each minute of connection time.

NOTE: Although telephone companies initially viewed ISDN as a big money maker, that projection proved to be somewhat premature. These companies envisioned huge profits, which never really materialized. There are many reasons for this. One is that ISDN modems are still very expensive compared to their 28.8Kbps counterparts. This is a significant deterrent to most casual users. Another reason is that consumers know they can avoid heavy-duty phone company charges by surfing at night. (For example, many telephone companies only enforce heavy charges from 8:00 a.m. to 5:00 p.m.) But these are not the only reasons. There are other methods of access emerging that will probably render ISDN technology obsolete. Today's consumers are keenly aware of these trends, and many have adopted a wait-and-see attitude.

Cable modems offer one promising solution. These new devices, currently being tested throughout the United States, will reportedly deliver Net access at 100 times the speed of modems now in use. However, there are deep problems to be solved within the cable modem industry. For example, no standards have yet been established. Therefore, each cable modem will be entirely proprietary. With no standards, the price of cable modems will probably remain very high (ranging anywhere from $300 to $600). This could discourage most buyers. There are also issues as to what cable modem to buy. Their capabilities vary dramatically. Some, for example, offer extremely high throughput while receiving data but only meager throughput when transmitting it. For some users, this simply isn't suitable. A practical example would be someone who plans to video-conference on a regular basis. True, they could receive the image of their video-conference partner at high speed, but they would be unable to send at that same speed.

NOTE: There are other more practical problems that plague the otherwise bright future of cable modem connections. For example, consumers are told that they will essentially have the speed of a low-end T3 connection for $39 a month, but this is only partially true. Although their cable modem and the coax wire it's connected to are capable of such speeds, the average consumer will likely never see the full potential because all inhabitants in a particular area (typically a neighborhood) must share the bandwidth of the connection. For example, in apartment buildings, the 10Mbps is divided among the inhabitants patched into that wire. Thus, if a user in apartment 1A is running a search agent that collects hundreds of megabytes of information each day, the remaining inhabitants in other apartments will suffer a tremendous loss of bandwidth. This is clearly unsuitable.

Cross Reference: Cable modem technology is an aggressive climate now, with several dozen big players seeking to capture the lion's share of the market. To get in-depth information about the struggle (and what cable modems have to offer), point your Web browser to http://rpcp.mit.edu/~gingold/cable/.

Other technologies, such as WebTV, offer promise. WebTV is a device that makes surfing the Net as easy as watching television. These units are easily installed, and the interface is quite intuitive. However, systems such as WebTV may bring an unwanted influence to the Net: censorship. Many of the materials on the Internet could be characterized as highly objectionable. In this category are certain forms of hard-core pornography and seditious or revolutionary material. If WebTV were to become the standard method of Internet access, the government might attempt to regulate what type of material could appear. This might undermine the grass-roots, free-speech environment of the Net.

NOTE: Since the writing of this chapter, Microsoft Corporation has purchased WebTV (even though the sales for WebTV proved to be far less than industry experts had projected). Of course, this is just my personal opinion, but I think the idea was somewhat ill-conceived. The Internet is not yet an entertainment medium, nor will it be for some time, largely due to speed and bandwidth constraints. One wonders whether Microsoft didn't move prematurely in making its purchase. Perhaps Microsoft bought WebTV expressly for the purpose of shelving it. This is possible. After all, such a purchase would be one way to eliminate what seemed (at least at the time) to be some formidable competition to MSN.


Cross Reference: WebTV does have interesting possibilities and offers one very simple way to get acquainted with the Internet. If you are a new user and find Net navigation confusing, you might want to check out WebTV's home page at http://www.webtv.net/.

Either way, the Internet is about to become an important part of every person's life. Banks and other financial institutions are now offering banking over the Internet. Within five years, this will likely replace the standard method of banking. Similarly, a good deal of commerce has already moved to the Net.


[Return to Introduction]

















Let's look at the remote attack. I will define what such an attack is and demonstrate some key techniques employed. Moreover, this text will serve as a generalized primer for new system administrators who may never have encountered a remote attack in real life.

What Is a Remote Attack?

A remote attack is any attack that is initiated against a machine that the attacker does not currently have control over; that is, it is an attack against any machine other than the attacker's own (whether that machine is on the attacker's subnet or 10,000 miles away). The best way to define a remote machine is this:

A remote machine is any machine--other than the one you are now on--that can be reached through some protocol over the Internet or any other network or medium.

The First Steps

The first steps, oddly enough, do not involve much contact with the target. (That is, they won't if the cracker is smart.) The cracker's first problem (after identifying the type of network, the target machines, and so on) is to determine with whom he is dealing. Much of this information can be acquired without disturbing the target. (We will assume for now that the target does not run a firewall. Most networks do not. Not yet, anyway.) Some of this information is gathered through the following techniques:

The techniques mentioned in this list may seem superfluous until you understand their value. Certainly, Farmer and Venema would agree on this point:

What should you do? First, try to gather information about your (target) host. There is a wealth of network services to look at: finger, showmount, and rpcinfo are good starting points. But don't stop there--you should also utilize DNS, whois, sendmail (smtp), ftp, uucp, and as many other services as you can find.

Cross Reference: The preceding paragraph is excerpted from Improving the Security of Your Site by Breaking Into It by Dan Farmer and Wietse Venema. It can be found online at http://www.craftwork.com/papers/security.html.

Collecting information about the system administrator is paramount. A system administrator is usually responsible for maintaining the security of a site. There are instances where the system administrator may run into problems, and many of them cannot resist the urge to post to Usenet or mailing lists for answers to those problems. By taking the time to run searches on the administrator's address (and any variation of it, as I will explain in the next section), you may be able to gain greater insight into his network, his security, and his personality. Administrators who make such posts typically specify their architecture, a bit about their network topology, and their stated problem.

Even evidence of a match for that address (or lack thereof) can be enlightening. For example, if a system administrator is in a security mailing list or forum each day, disputing or discussing various security techniques and problems with fellow administrators, this is evidence of knowledge. In other words, this type of person knows security well and is therefore likely well prepared for an attack. Analyzing such a person's posts closely will tell you a bit about his stance on security and how he implements it. Conversely, if the majority of his questions are rudimentary (and he often has a difficult time grasping one or more security concepts), it might be evidence of inexperience.

From a completely different angle, if his address does not appear at all on such lists or in such forums, there are only a few possibilities why. One is that he is lurking through such groups. The other is that he is so bad-ass that he has no need to discuss security at all. (Basically, if he is on such lists at all, he DOES receive advisories, and that is, of course, a bad sign for the cracker, no matter what way you look at it. The cracker has to rely in large part on the administrator's lack of knowledge. Most semi-secure platforms can be relatively secure even with a minimal effort by a well-trained system administrator.)

In short, these searches make a quick (and painless) attempt to cull some important information about the folks at the other end of the wire.

You will note that I referred to "any variation" of a system administrator's address. Variations in this context mean any possible alternate addresses. There are two kinds of alternate addresses. The first kind is the individual's personal address. That is, many system administrators may also have addresses at or on networks other than their own. (Some administrators are actually foolish enough to include these addresses in the fields provided for address on an InterNIC record.) So, while they may not use their work address to discuss (or learn about) security, it is quite possible that they may be using their home address.

To demonstrate, I once cracked a network located in California. The administrator of the site had an account on AOL. The account on AOL was used in Usenet to discuss various security issues. By following this man's postings through Usenet, I was able to determine quite a bit. In fact (and this is truly extraordinary), his password, I learned, was the name of his daughter followed by the number 1.

The other example of a variation of an address is this: either the identical address or an address assigned to that person's same name on any machine within his network. Now, let's make this a little more clear. First, on a network that is skillfully controlled, no name is associated with root. That is because root should be used as little as possible and viewed as a system ID, not to be invoked unless absolutely necessary. (In other words, because su and perhaps other commands or devices exist that allow an administrator to do his work, root need not be directly invoked, except in a limited number of cases.)

NOTE: Attacking a network run on Windows NT is a different matter. In those cases, you are looking to follow root (or rather, Administrator) on each box. The design of NT makes this a necessity, and Administrator on NT is vastly different from root on a UNIX box.

Because root is probably not invoked directly, the system administrator's ID could be anything. Let's presume here that you know that ID. Let's suppose it is walrus. Let us further suppose that on the host query that you conducted, there are about 150 machines. Each of those machines has a distinct name. For example, there might be mail.victim.net, news.victim.net, shell.victim.net, cgi.victim.net, and so forth. (Although, in practice, they will more likely have "theme" names that obscure what the machine actually does, like sabertooth.victim.net, bengal.victim.net, and lynx.victim.net.)

The cracker should try the administrator's address on each machine. Thus, he will be trying walrus@shell.victim.net, walrus@sabertooth.victim.net, and so forth. (This is what I refer to as a variation on a target administrator's address.) In other words, try this on each box on the network, as well as run all the general diagnostic stuff on each of these machines. Perhaps walrus has a particular machine that he favors, and it is from this machine that he does his posting.
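The mechanics of this are trivial to script. Here is a minimal sketch (using the hypothetical ID walrus and the victim.net host names from the example above) that generates the candidate address variations:

```python
# Generate candidate address variations for a known admin ID.
# "walrus", "victim.net", and the host names below are the hypothetical
# examples used in the text above.

def address_variations(user, domain, hosts):
    """Build user@host.domain for every machine turned up by the host query."""
    return ["%s@%s.%s" % (user, host, domain) for host in hosts]

hosts = ["mail", "news", "shell", "cgi", "sabertooth", "bengal", "lynx"]
candidates = address_variations("walrus", "victim.net", hosts)
print(candidates[0])    # walrus@mail.victim.net
print(len(candidates))  # 7
```

Each candidate then gets the same treatment: searched in the archives and run through the general diagnostics.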

Here's an interesting note: If the target is a provider (or other system that one can first gain legitimate access to), you can also gain an enormous amount of information about the system administrator simply by watching where he is coming in from. This, to some extent, can be done from the outside as well, with a combination of finger and rusers. In other words, you are looking to identify foreign networks (that is, networks other than the target) on which the system administrator has accounts. Obviously, if his last login was from Netcom, he has an account on Netcom. Follow that ID for a day or so and see what surfaces.

About Finger Queries

In the previously referenced paper by Farmer and Venema (a phenomenal and revolutionary document in terms of insight), one point is missed: The use of the finger utility can be a dangerous announcement of your activities. What if, for example, the system administrator is running MasterPlan?

TIP: MasterPlan is a utility whose function is to trap and log all finger queries directed to the user; that is, MasterPlan will identify the IP of the party doing the fingering, the time that such fingering took place, the frequency of such fingering, and so forth. It basically attempts to gather as much information about the person fingering you as possible. Also, it is not necessary that they use MasterPlan. The system administrator might easily have written his own hacked finger daemon, one that perhaps even traces the route back to the original requesting party--or worse, fingers them in return.
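To make the TIP concrete, here is a rough sketch (purely illustrative; this is not MasterPlan's actual code) of the bookkeeping such a trap performs: record the source IP and time of each query, and count the frequency per source:

```python
import time
from collections import defaultdict

class FingerTrap:
    """Illustrative finger-trap bookkeeping: log each query, count per IP."""

    def __init__(self):
        self.log = []                   # (timestamp, source_ip, queried_user)
        self.counts = defaultdict(int)  # queries seen per source IP

    def record(self, source_ip, queried_user, when=None):
        when = time.time() if when is None else when
        self.log.append((when, source_ip, queried_user))
        self.counts[source_ip] += 1

trap = FingerTrap()
trap.record("10.0.0.5", "walrus")
trap.record("10.0.0.5", "root")
print(trap.counts["10.0.0.5"])  # 2
```

A repeat offender stands out immediately in the counts, which is exactly why the cautious cracker never fingers a target from his own address.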

To avoid the possibility of their finger queries raising any flags, most crackers use finger gateways. Finger gateways are Web pages, and they usually sport a single input field that points to a CGI program on the drive of the remote server that performs finger lookup functions. A finger gateway is available on the Hackers Homepage (hakkerz.home.ml.org).

By using a finger gateway, the cracker can obscure his source address. That is, the finger query is initiated by the remote system that hosts the finger gateway. (In other words, not the cracker's own machine but some other machine.) True, an extremely paranoid system administrator might track down the source address of that finger gateway; he might even contact the administrator of the finger gateway site to have a look at the access log there. In this way, he could identify the fingering party. That this would happen, however, is quite unlikely, especially if the cracker staggers his gateways. In other words, if the cracker intends to do any of this type of work "by hand," he should really do each finger query from a different gateway. Because there are 3,000+ finger gateways currently on the Web, this is not an unreasonable burden. Furthermore, if I were doing the queries, I would set them apart by several minutes (or ideally, several hours).

NOTE: One technique involves the redirection of a finger request. This is where the cracker issues a raw finger request to one finger server, requesting information from another. This is referred to as forwarding a finger request. The syntax of such a command is finger user@real_target.com@someother_host.com. For example, if you wanted to finger a user at primenet.com, you might use deltanet.com's finger service to forward the request. However, in today's climate, most system administrators have finger forwarding turned off.
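The mechanics of such a forwarded request are easy to see if you decompose the address yourself. In the finger protocol, the client connects to port 79 on the last host in the chain and sends everything to the left of that final @ as the query line. A small sketch:

```python
# Decompose a forwarded finger address: the client connects to the LAST
# host in the chain and sends the remainder as the query line (terminated
# with CRLF, as the finger protocol expects).

def split_forwarded(address):
    """Return (host_to_connect_to, query_line) for a finger address."""
    query, _, relay = address.rpartition("@")
    return relay, query + "\r\n"

relay, query = split_forwarded("user@real_target.com@someother_host.com")
print(relay)        # someother_host.com
print(repr(query))  # 'user@real_target.com\r\n'
```

The relay host, on receiving that query line, simply repeats the process against real_target.com--if forwarding is enabled, which, as noted, it usually no longer is.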

The Operating System

You may have to go through various methods (including but not limited to those described in the preceding section) to identify the operating system and version being used on the target network. In earlier years, one could be pretty certain that the majority of machines on a target network ran similar software on similar hardware. Today, it is another ball game entirely. Today, networks may harbor dozens of different machines with disparate operating systems and architecture. One would think that for the cracker, this would be a hostile and difficult-to-manage environment. Not so.

The more diverse your network nodes are (in terms of operating system and architecture), the more likely it is that a security hole exists. There are reasons for this, and while I do not intend to explain them thoroughly, I will relate at least this: Each operating system has its own set of bugs. Some of these bugs are known, and some may be discovered over time. In a relatively large network, where there may be many different types of machines and software, you have a better chance of finding a hole. The system administrator is, at day's end, only a human being. He cannot be constantly reviewing security advisories for each platform in turn. There is a strong chance that his security knowledge of this or that system is weak.

In any event, once having identified the various operating systems and architectures available at the target, the next step is study. A checklist should be made that lists each operating system and machine type. This checklist will assist you tremendously as you go to the next step, which is to identify all known holes on that platform and understand each one.

NOTE: Some analysts might make the argument that tools like ISS and SATAN will identify all such holes automatically and, therefore, research need not be done. This is erroneous, for several reasons. First, such tools may not be complete in their assessment. Here is why: Although both of the tools mentioned are quite comprehensive, they are not perfect. For example, holes emerge each day for a wide range of platforms. True, both tools are extensible, and one can therefore add new scan modules, but the scanning tools that you have are limited to the programmer's knowledge of the holes that existed at the time of the coding of the application.

Therefore, to make a new scanning module to be added to these extensible and malleable applications, you must first know that such new holes exist. Second, and perhaps more importantly, simply knowing that a hole exists does not necessarily mean that you can exploit it--you must first understand it. (Unless, of course, the hole is an obvious and self-explanatory one, such as the -froot rlogin problem on some versions of the AIX operating system. By initiating an rlogin session with the -froot flag, you can gain instant privileged access on many older AIX-based machines.) For these reasons, hauling off and running a massive scan is a premature move.

To gather this information, you will need to visit a few key sites. The first such site you need to visit is the firewalls mailing list archive page.

Cross Reference: The firewalls mailing list archive page can be found online at http://www.netsys.com/firewalls/ascii-index.html.

You may initially wonder why this list would be of value, because the subject discussed is firewall-related. (Remember, we began this chapter with the presumption that the target was not running a firewall.) The firewalls list archive is valuable because it is often used--over the objections of many list members--to discuss other security-related issues. Another invaluable source of such data is BUGTRAQ, which is a searchable archive of known vulnerabilities on various operating systems (though largely UNIX).

Cross Reference: BUGTRAQ is located online at http://www.geek-girl.com/bugtraq/search.html.

These searchable databases are of paramount importance. A practical example will help tremendously at this point. Suppose that your target is a machine running AIX. First, you would go to the ARC Searchable WAIS Gateway for DDN and CERT advisories.

Cross Reference: The ARC Searchable WAIS Gateway for DDN and CERT advisories can be found online at http://info.arc.com/sec_bull/sec_bullsearch.html.

At the bottom of that page is an input field. Into it, I entered the search term AIX. The results of that search produced a laundry list of AIX vulnerabilities.

At this stage, you can begin to do some research. After reading the initial advisory, if there is no more information than a simple description of the vulnerability, do not despair. You just have to go to the next level. The next phase is a little bit more complex. After identifying the most recent weakness (and having read the advisory), you must extract from that advisory (and all that follow it) the commonly used, often abbreviated, or "jargon," name for the hole. For example, after a hole is discovered, it is often referred to by security folks with a name that may not reflect the entire problem. (An example would be "the Linux telnetd problem" or "AIX's froot hole" or some other, brief term by which the hole becomes universally identified.) The extraction process is quickly done by taking the ID number of the advisory and running it through one of the abovementioned archives like BUGTRAQ or Firewalls. Here is why:

Typically, when a security professional posts either an exploit script, a tester script (one that tests to see if the hole exists), or a commentary, they will almost always include complete references to the original advisory. Thus, you will see something similar to this in their message: "Here's a script to test if you are vulnerable to the talkd problem talked about in CA-97.04."

This message is referring to CERT Advisory number 97.04, which was first issued on January 27, 1997. By using this number as a search expression, you will turn up all references to it. After reading 10 or 12 results from such a search, you will know what the security crowd is calling that hole. After you have that, you can conduct an all-out search in all legitimate and underground database sources to get every shred of information about the hole. You are not looking for initial postings in particular, but subsequent, trailing ones. (Some archives have an option where you can specify a display by thread; these are preferred. This allows you to see the initial posting and all subsequent postings about that original message; that is, all the "re:" follow-ups.) However, some search engines do not provide for an output in threaded form; therefore, you will simply have to rake through them by hand.
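The extraction step itself is easily mechanized. A small sketch that pulls CERT advisory IDs (in the CA-YY.NN form used above) out of a posting, so they can be fed straight back into an archive search; the sample message is the hypothetical one quoted earlier:

```python
import re

# CERT advisory IDs of the era take the form CA-YY.NN (e.g. CA-97.04).
ADVISORY_ID = re.compile(r"\bCA-\d{2}\.\d{2}\b")

msg = ("Here's a script to test if you are vulnerable to the talkd "
       "problem talked about in CA-97.04.")
print(ADVISORY_ID.findall(msg))  # ['CA-97.04']
```

Run over a directory of saved postings, a loop like this builds the cross-reference list of advisories (and thus jargon names) in seconds rather than hours of hand-raking.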

The reason that you want these follow-ups is because they usually contain exploit or test scripts (programs that automatically test or simulate the hole). They also generally contain other technical information related to the hole. For example, one security officer might have found a new way to implement the vulnerability, or might have found that an associated program (or include file or other dependency) may be the real problem or even a great contributor to the hole. The thoughts and reflections of these individuals are pure gold, particularly if the hole is a new one. These individuals are actually doing all the work for you: analyzing and testing the hole, refining attacks against it, and so forth.

TIP: Many exploit and test scripts are posted in standard shell or C language and are therefore a breeze to either reconfigure for your own system or compile for your architecture. In most instances, only minimal work has to be done to make them work on your platform.

So, to this point, you have defined a portion (or perhaps all) of the following chief points:

Now you can proceed to the next step.

One point of interest: It is extremely valuable if you can also identify machines that may be co-located. This is, of course, strictly in cases where the target is an Internet service provider (ISP). ISPs often offer deals for customers to co-locate a machine on their wire. There are certain advantages to this for the customer. One of them is cost. If the provider offers to co-locate a box on its T3 for, say, $600 a month, this is infinitely less expensive than running a machine from your own office that hooks into a T1. A T1 runs about $900-$1,200 monthly. You can see why co-location is popular: You get speeds far faster for much less money and headache. For the ISP, it is nothing more than plugging a box into its Ethernet system. Therefore, even setup and administration costs are lower. And, perhaps most importantly of all, it takes the local telephone company out of the loop. Thus, you cut even more cost, and you can establish a server immediately instead of waiting six weeks.

These co-located boxes may or may not be administered by the ISP. If they are not, there is an excellent chance that these boxes either have (or will later develop) holes. This is especially likely if the owner of the box employs a significant amount of CGI or other self-designed program modules that the ISP has little or no control over. By compromising that box, you have an excellent chance of bringing the entire network under attack, unless the ISP has purposefully strung the machine directly to its own router or a hub (or instituted some other procedure for segmenting the co-located boxes from the rest of the network).

NOTE: This can be determined to some degree using traceroute or whois services. In the case of traceroute, you can identify the position of the machine on the wire by examining the path of the traced route. In a whois query, you can readily see whether the box has its own domain server or whether it is using someone else's (an ISP's).

Doing a Test Run

The test-run portion of the attack is practical only for those individuals who are serious about cracking. Your average cracker will not undertake such activity, because it involves spending a little money. However, if I were counseling a cracker, I would recommend it.

This step involves establishing a single machine with the identical distribution as the target. Thus, if the target is a SPARCstation 2 running Solaris 2.4, you would erect an identical machine and string it to the Net via any suitable method (by modem, ISDN, Frame Relay, T1, or whatever you have available). After you have established the machine, run a series of attacks against it. There are two things you are looking for:

There are a number of reasons for this, and some are not so obvious. In examination of the logs on the attacking side, the cracker can gain an idea of what the attack should look like if his target is basically unprotected--in other words, if the target is not running custom daemons. This provides the cracker a little road map to go by; certainly, if his ultimate scan and attack of the target do not look nearly identical, this is cause for concern. All things being equal, an identically configured machine (or, I should say, an apparently identically configured machine) should respond identically. If it does not, the folks at the target have something up their sleeve. In this instance, the cracker would be wise to tread carefully.

By examining the victim-side logs, the cracker can get a look at what his footprint will look like. This is also important to know. On diverse platforms, there are different logging procedures. The cracker should know at a minimum exactly what these logging procedures are; that is, he needs to know each and every file (on the identically configured machine) that will show evidence of an intrusion. This information is paramount, because it serves as a road map also: It shows him exactly what files have to be altered to erase any evidence of his attack. The only way to identify these files for certain is to conduct a test under a controlled environment and examine the logs for themselves.

In actual attacks, there should be only a few seconds (or minutes at most) before root (or some high level of privilege) is obtained. Similarly, it should be only seconds thereafter (or minutes at worst) before evidence of that intrusion is erased. For the cracker, any other option is a fatal one. They may not suffer from it in the short run, but in the long run, they will end up in handcuffs.

This step is not as expensive as you would think. There are newsgroups (most notably, misc.forsale.computers.workstation) where one can obtain the identical machine (or a close facsimile) for a reasonable price. Generally, the seller of such a machine will load a full version of the operating system "for testing purposes only." This is their way of saying "I will give you the operating system, which comes without a license and therefore violates the license agreement. If you keep it and later come under fire from the vendor, you are on your own."

Even licensed resellers will do this, so you can end up with an identical machine without going to too much expense. (You can also go to defense contracting firms, many of which auction off their workstations for a fraction of their fair market value. The only bar here is that you must have the cash ready; you generally only get a single shot at a bid.)

Other possibilities include having friends set up such a box at their place of work or even at a university. All you really need are the logs. I have always thought that it would be a good study practice to maintain a database of such logs per operating system per attack and per scanner--in other words, have a library of what such attacks look like, given the aforementioned variables. This, I think, would be a good training resource for new system administrators, something like "This is what a SS4 looks like when under attack by someone using ISS. These are the log files you need to look for and this is how they will appear."

Surely, a script could be fashioned (perhaps an automated one) that would run a comparative analysis against the files on your workstation. This process could be done once a day as a cron job. It seems to me that at least minimal intrusion-detection systems could be designed this way. Such tools do exist, but have been criticized by many individuals because they can be "fooled" too easily. There is an excellent paper that treats this subject, at least with respect to SunOS. It is titled USTAT: A Real Time Intrusion Detection System for UNIX. (This paper was, in fact, a thesis for the completion of a master's in computer science at the University of California, Santa Barbara. It is very good.) In the abstract, the author writes:
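A minimal version of that comparative-analysis script might look like the following. This is a sketch only: it snapshots checksums of a list of critical files and reports any that change. A real deployment would also have to protect the stored baseline itself from tampering.

```python
import hashlib
import os

def checksum(path):
    """Checksum a file's contents (MD5 here, purely as a change detector)."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def snapshot(paths):
    """Record a baseline: path -> checksum for every file that exists."""
    return {p: checksum(p) for p in paths if os.path.exists(p)}

def compare(baseline, current):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in baseline.items() if current.get(p) != digest]
```

The idea: run snapshot() once on a known-clean system over the log files and binaries an intruder would alter, store the result somewhere safe, then have cron re-run it daily and diff the two with compare(). A doctored wtmp or trojaned binary shows up immediately.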

In this document, the development of the first USTAT prototype, which is for SunOS 4.1.1, is described. USTAT makes use of the audit trails that are collected by the C2 Basic Security Module of SunOS, and it keeps track of only those critical actions that must occur for the successful completion of the penetration. This approach differs from other rule-based penetration identification tools that pattern match sequences of audit records.

Cross Reference: The preceding paragraph is excerpted from USTAT: A Real Time Intrusion Detection System for UNIX by Koral Ilgun. It can be found online at ftp://coast.cs.purdue.edu/pub/doc/intrusion_detection/ustat.ps.gz

Although we proceeded under the assumption that the target network was basically an unprotected, out-of-the-box install, I thought I should mention tools like the one described in the paper referenced previously. The majority of such tools have been employed on extremely secure networks--networks often associated with classified or even secret or top-secret work.

Another interesting paper lists a few of these tools and makes a brief analysis of each. It discusses how

Computer security officials at the system level have always had a challenging task when it comes to day-to-day mainframe auditing. Typically the auditing options/features are limited by the mainframe operating system and other system software provided by the hardware vendor. Also, since security auditing is a logical subset of management auditing, some of the available auditing options/features may be of little value to computer security officials. Finally, the relevant auditing information is probably far too voluminous to process manually and the availability of automated data reduction/analysis tools is very limited. Typically, 95% of the audit data is of no security significance. The trick is determining which 95% to ignore.

Cross Reference: The previous paragraph is excerpted from Summary of the Trusted Information Systems (TIS) Report on Intrusion Detection Systems, prepared by Victor H. Marshall, Systems Assurance Team Leader, Booz, Allen & Hamilton Inc. This document can be found online at ftp://coast.cs.purdue.edu/pub/doc/intrusion_detection/auditool.txt.Z

In any event, this "live" testing technique should be primarily employed where there is a single attack point. Typical situations are where you suspect that one of the workstations is the most viable target (where perhaps the others will refuse all connections from outside the subnet and so forth). Obviously, I am not suggesting that you erect an exact model of the target network; that could be cost and time prohibitive. What I am suggesting is that in coordinating a remote attack, you need to have (at a minimum) some idea of what is supposed to happen. Simulating that attack on a host other than the target is a wise thing to do. Otherwise, there is no guarantee that you can even marginally ensure that the data you receive back has some integrity. Bellovin's paper on Berferd should be a warning to any cracker that a simulation of a vulnerable network is not out of the question. In fact, I have wondered many times why security technologies have not focused entirely on this type of technique, especially since scanners have become so popular.

What is the difficulty in a system administrator creating his own such system on the fly? How difficult would it be for an administrator to write custom daemons (on a system where the targeted services aren't even actually running) that would provide the cracker with bogus responses? Isn't this better than announcing that you have a firewall (or TCP_WRAPPER), therefore alerting the attacker to potential problems? Never mind passive port-scanning utilities, let's get down to the nitty-gritty: This is how to catch a cracker--with a system designed exclusively for the purpose of creating logs that demonstrate intent. This, in my opinion, is where some new advances ought to be made. These types of systems offer automation to the process of evidence gathering.

The agencies that typically utilize such tools are few. Mostly, they are military organizations. An interesting document is available on the Internet in regard to military evaluations and intrusion detection. What is truly interesting about the document is the flair with which it is written. For instance, sample this little excerpt:

For 20 days in early spring 1994, Air Force cybersleuths stalked a digital delinquent raiding unclassified computer systems at Griffiss AFB, NY. The investigators had staked out the crime scene--a small, 12-by-12-foot computer room in Rome Laboratory's Air Development Center--for weeks, surviving on Jolt cola, junk food and naps underneath desks. Traps were set by the Air Force Information Warfare Center to catch the bandit in the act, and `silent' alarms sounded each time their man slinked back to survey his handiwork. The suspect, who dubbed himself `Data Stream,' was blind to the surveillance, but despite this, led pursuers on several high-speed chases that don't get much faster--the speed of light. The outlaw was a computer hacker zipping along the ethereal lanes of the Internet, and tailing him was the information superhighway patrol--the Air Force Office of Special Investigations computer crime investigations unit.

Cross Reference: The previous paragraph is excerpted from "Hacker Trackers: OSI Computer Cops Fight Crime On-Line" by Pat McKenna. It can be found online at http://www.af.mil/pa/airman/0496/hacker.htm.

The document doesn't give as much technical information as one would want, but it is quite interesting, all the same. Probably a more practical document for the legal preservation of information in the investigation of intrusions is one titled "Investigating and Prosecuting Network Intrusions." It was authored by John C. Smith, Senior Investigator in the Computer Crime Unit of the Santa Clara County District Attorney's Office.

Cross Reference: "Investigating and Prosecuting Network Intrusions" can be found online at http://www.eff.org/pub/Legal/scda_cracking_investigation.paper.

In any event, as I have said, at least some testing should be done beforehand. That can only be done by establishing a like box with like software.

Tools: About Holes and Other Important Features

Next, you need to assemble the tools you will actually use. These tools will most probably be scanners. You will be looking (at a minimum) to identify all services now running on the target. Based on your analysis of the operating system (as well as the other variables I've mentioned in this chapter), you will need to evaluate your tools to determine what areas or holes they do not cover.
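As a toy illustration of what every scanner does at its core (a plain TCP connect() sweep; real tools like SATAN layer far more analysis on top), consider this Python sketch, which scans a listener we set up ourselves:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Try a plain TCP connection to each port; return those that accept.
    An accepted connection means some service is listening there."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        if s.connect_ex((host, port)) == 0:
            open_ports.append(port)
        s.close()
    return open_ports

# Demonstrate against a listener we control, standing in for a target:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # the OS picks a free port
listener.listen(1)
target_port = listener.getsockname()[1]

found = scan("127.0.0.1", [target_port])
print(found == [target_port])         # -> True
listener.close()
```

Note that each probe is an ordinary connection request: this is precisely why a scan is so conspicuous in the target's logs, as discussed below.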

In instances where a particular service is covered by one tool but not another, it is best to integrate the two tools together. The ease of integration of such tools will depend largely on whether these tools can simply be attached as external modules to a scanner like SATAN or SAFESuite. Again, here the use of a test run can be extremely valuable; in most instances, you cannot simply attach an external program and have it work flawlessly.

To determine the exact outcome of how all these tools will work in concert, it is best to do this at least on some machine (even if it is not identical to the target). That is because, here, we are concerned with whether the scan will be somehow interrupted or corrupted as the result of running two or more modules of disparate design. Remember that a real-time scanning attack should be done only once. If you screw it up, you might not get a second chance.

So, you will be picking your tools (at least for the scan) based on what you can reasonably expect to find at the other end. In some cases, this is an easy job. For example, perhaps you already know that someone on the box is running X Window System applications across the Net. (Not bloody likely, but not unheard of.) In that case, you will also be scanning for xhost problems, and so it goes.

Remember that a scanner is a drastic solution. It is the equivalent of running up to an occupied home with a crowbar in broad daylight, trying all the doors and windows. If the system administrator is even moderately tuned into security issues, you have just announced your entire plan.

TIP: There are some measures you can take to avoid that announcement, but they are drastic: You can institute the same security procedures that other networks do, including installing software (sometimes a firewall, sometimes not) that will refuse to report your machine's particulars to the target. There are serious problems with this type of technique, however, because it requires a high level of skill. (Also, many tools will be rendered useless by such measures. Some tools are designed so that one or more functions require the ability to go out of your network, through the router, and back inside again.)

Again, however, we are assuming here that the target is not armored; it's just an average site, which means that we needn't stress too much about the scan. Furthermore, as Dan Farmer's recent survey suggests, scanning may not be a significant issue anyway. According to Farmer (and I have implicit faith in his representations, knowing from personal experience that he is a man of honor), the majority of networks don't even notice the traffic:

...no attempt was made to hide the survey, but only three sites out of more than two thousand contacted me to inquire what was going on when I performed the unauthorized survey (that's a bit over one in one thousand questioning my activity). Two were from the normal survey list, and one was from my random group.

Cross Reference: The preceding paragraph is excerpted from the introduction of Shall We Dust Moscow? by Dan Farmer. This document can be found online at http://www.trouble.org/survey/introduction.html

That scan involved over 2,000 hosts, the majority of which were fairly sensitive sites (for example, banks). You would expect that these sites would be ultra-paranoid, filtering every packet and immediately jumping on even the slightest hint of a scan.

Developing an Attack Strategy

The days of roaming around the Internet, cracking this and that server are basically over. Years ago, compromising the security of a system was viewed as a minor transgression as long as no damage was done. Today, the situation is different. Today, the value of data is becoming an increasingly talked-about issue. Therefore, the modern cracker would be wise not to crack without a reason. Similarly, he would be wise to set forth cracking a server only with a particular plan.

The only instance in which this does not apply is where the cracker is either located in a foreign state that has no specific law against computer intrusion (Berferd again) or one that provides no extradition procedure for that particular offense (for example, the NASA case involving a student in Argentina). All other crackers would be wise to tread very cautiously.

Your attack strategy will depend on what you want to accomplish. We will assume, however, that the task at hand is nothing more than a compromise of system security. If this is your plan, you need to lay out how the attack will be accomplished. The longer the scan takes (and the more machines included within it), the more likely it is to be discovered immediately. Also, the more scan data you have to sift through, the longer it will take to implement an attack based upon that data. The time that elapses between the scan and the actual attack, as I've mentioned, should be short.

Some things are therefore obvious (or should be). If you determine from all of your data collection that certain portions of the network are segmented by routers, switches, bridges, or other devices, you should probably exclude those from your scan. After all, compromising those systems will likely produce little benefit. Suppose you gained root on one such box in a segment. How far do you think you could get? Do you think that you could easily cross a bridge, router, or switch? Probably not. Therefore, sniffing will only render relevant information about the other machines in the segment, and spoofing will likewise work (reliably) only against those machines within the segment. Because what you are looking for is root on the main box (or at least, within the largest network segment available), it is unlikely that a scan on smaller, more secure segments would prove to be of great benefit.

NOTE: Of course, if these machines (for whatever reason) happen to be the only ones exposed, by all means, attack them (unless they are completely worthless). For example, it is a common procedure to place a Web server outside the network firewall or make that machine the only one accessible from the void. Unless the purpose of the exercise is to crack the Web server (and cause some limited, public embarrassment to the owners of the Web box), why bother? These machines are typically "sacrificial" hosts--that is, the system administrator has anticipated losing the entire machine to a remote attack, so the machine has nothing of import upon its drives. Nothing except Web pages, that is.

In any event, once you have determined the parameters of your scan, implement it.

A Word About Timing Scans

When should you implement a scan? The answer to this is really "never." However, if you are going to do it, I would do it late at night relative to the target. Because it is going to create a run of connection requests anyway (and because it would take much longer if implemented during high-traffic periods), I think you might as well take advantage of the graveyard shift. The shorter the time period, the better off you are.

After the Scan

After you have completed the scan, you will be subjecting the data to analysis. The first issue you want to get out of the way is whether the information is even authentic. (This, to some degree, is established from your sample scans on a like machine with the like operating system distribution.)

Analysis is the next step. This will vary depending upon what you have found. Certainly, the documents included in the SATAN distribution can help tremendously in this regard. Those documents (tutorials about vulnerabilities) are brief, but direct and informative, and they address each of the common vulnerabilities the scanner reports.

In addition to these pieces of information, you should apply any knowledge that you have gained through the process of gathering information on the specific platform and operating system. In other words, if a scanner reports a certain vulnerability (especially a newer one), you should refer back to the database of information that you have already built from raking BUGTRAQ and other searchable sources.

This is a major point: There is no way to become either a master system administrator or a master cracker overnight. The hard truth is this: You may spend weeks studying source code, vulnerabilities, a particular operating system, or other information before you truly understand the nature of an attack and what can be culled from it. Those are the breaks. There is no substitute for experience, nor is there a substitute for perseverance or patience. If you lack any of these attributes, forget it.

That is an important point. Whether we are talking about individuals like Kevin Mitnick (cracker) or people like Wietse Venema (hacker), it makes little difference. Their work and their accomplishments have been discussed in various news magazines and online forums. They are celebrities within the Internet security community (and, in some cases, beyond it). However, their accomplishments (good or bad) resulted from hard work, study, ingenuity, thought, imagination, and self-application. Thus, no firewall will save a security administrator who isn't on top of things, nor will SATAN help a newbie cracker unlawfully breach the security of a remote target. That's the bottom line.


Remote attacks are becoming increasingly common. As discussed elsewhere, the ability to run a scan is now within the grasp of the average user. Similarly, the proliferation of searchable vulnerability indexes has greatly enhanced one's ability to identify possible security issues.

Some individuals suggest that the free sharing of such information is itself contributing to the poor state of security on the Internet. That is incorrect. Rather, system administrators must make use of such publicly available information. They should, technically, perform the procedures described here on their own networks. It is not so much a matter of cost as it is time.

One interesting phenomenon is the increase in tools to attack Windows NT boxes. Not just scanning tools, either, but sniffers, password grabbers, and password crackers. In reference to remote attack tools, though, the best tool available for NT is SAFEsuite by Internet Security Systems (ISS). It contains a wide variety of tools, although the majority were designed for internal security analysis.

For example, consider the Intranet Scanner, which assesses the internal security of a network tied to a Microsoft Windows NT server. Note here that I write only that the network is tied to the NT server. This does not mean that all machines on the network must run NT in order for the Intranet Scanner to work. Rather, it is designed to assess a network that contains nodes of disparate architectures and operating systems. So, you could have boxes running Windows 95, UNIX, or potentially any other operating system running TCP/IP. ISS also publishes a paper on the subject titled "Security Assessment in the Windows NT Environment: A White Paper for Network Security Professionals." It discusses the many features of the product line and a bit about Windows NT security in general.

Cross Reference: To get a better idea of what Intranet Scanner offers, check out http://eng.iss.net/prod/winnt.html.

Specific ways to target specific operating systems (as in "How To" sections) are beyond the scope of this book, not because I lack the knowledge but because it could take volumes to relate. To give you a frame of reference, consider this: The Australian CERT (AUSCERT) UNIX Security Checklist consists of at least six pages of printed information. The information is extremely abbreviated and is difficult to interpret by anyone who is not well versed in UNIX. Taking each point that AUSCERT raises and expanding it into a detailed description and tutorial would likely create a 400-page book, even if the format contained simple headings such as Daemon, Holes, Source, Impact, Platform, Examples, Fix, and so on. (That document, by the way, discussed elsewhere in this book, is the definitive list of UNIX security vulnerabilities.)

In closing, a well-orchestrated and formidable remote attack is not the work of some half-cocked cracker. It is the work of someone with a deep understanding of the system--someone who is cool, collected, and quite well educated in TCP/IP. (Although that education may not have come in a formal fashion.) For this reason, it is a shame that crackers usually come to such a terrible end. One wonders why these talented folks turn to the dark side.

I know this, though: It has nothing to do with money. There are money-oriented crackers, and they are professionals. But the hobbyist cracker is a social curiosity--so much talent and so little common sense. It is extraordinary, really, for one incredible reason: It was crackers who spawned most of the tools in this book. Their activities gave rise to the more conventional (and more talented) computing communities that are coding special security applications. Therefore, the existence of specialized tools is really a monument to the cracking community. They have had a significant impact, and one such impact was the development of the remote attack. The technique not only exists because of these curious people, but also grows in complexity because of them.


[Return to The Introduction]





















A lot of people have been concerned recently about their anonymity on the internet, deleting cookies and paying money for services like the Anonymizer. Well, here's an easy (and free)
way to get around this problem: use the proxy machines of other servers.

Finding them is pretty easy; the best way is probably the ping program. From an MS-DOS window in
Win95 (or from your shell), type:

ping proxy.name-of-some-isp.net

If you get a message like 'bad IP address', there's no such machine, but if you get ICMP echo information
back, then obviously there is. You can also try looking around the ISP's or company's web site for
information, or perhaps send a forged email to the system admin asking if they have a proxy
machine (they're probably too busy or unconcerned to check where it came from).
If you need the addresses of some ISPs, check out yahoo.com
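If you'd rather script the check than ping by hand, here's a rough Python equivalent. It only tests whether the name resolves at all (sending real ICMP echoes needs raw sockets and root), and the hostnames below are made up:

```python
import socket

def proxy_exists(hostname):
    """Roughly what ping's first step tells you: does the host name
    resolve at all? (Actual ICMP echo requires raw sockets.)"""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(proxy_exists("localhost"))                  # True on any box
print(proxy_exists("proxy.no-such-isp.invalid"))  # .invalid never resolves
```

A name that resolves is only a candidate, of course; you still have to find out which port the proxy answers on.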

To use a proxy through your web browser: in Netscape, click Options|Network Preferences, then
click the 'Proxies' tab, check the 'Manual Proxy Configuration' radio button, and click the
'View' button. Set it up for whatever protocols you want, probably FTP and HTTP (some proxies
might only support HTTP). *Most* proxy machines operate on port 8080, but not always; 3128 is also common.

Email the admin and ask. Be polite :) In Internet Explorer, click View|Options| then click on the 'Connection' tab and set it up a la Netscape.
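The same manual-proxy idea can be scripted as well as clicked. A sketch in Python (the proxy address below is a placeholder in the same spirit as the ping example; whether a given proxy will actually relay for you is another matter):

```python
import urllib.request

# Placeholder proxy address; ports 8080 and 3128 are the common choices
proxies = {"http": "http://proxy.name-of-some-isp.net:8080"}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# Requests made through this opener are relayed by the proxy, so the far
# end logs the proxy's address instead of yours, e.g.:
# opener.open("http://www.example.com/guestbook.cgi")
```

This is exactly what the browser's 'Manual Proxy Configuration' dialog sets up for you behind the scenes.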

Once you have done this properly, your real IP address won't show up in:

Guestbooks and counter logs.
WWW boards.
Java/HTML chat rooms.
Anonymous FTP through your browser.

Cookies will also be useless, so don't bother with the Anonymizer. You may even be allowed into
'customer only' FTP servers. Many ISPs have 'customer only' sections of their web sites (through
CGI); you can access them, find out stuff about their servers, get free counters, etc.

*Also*, if you have a "web email" account (Hotmail, Rocketmail, etc.) and you post a message through the
web interface, the IP of the proxy, *not* yours, will show up.

It is probably not safe to use the proxy for hacking (denial of service attacks via anonymous FTP or
whatever), as the owner of the proxy machine would probably give away your IP address to whoever
you've been picking on. *Also*, some Squid proxies (a common type) do reveal your real IP for certain CGI requests.


[Return to the Introduction]























This chapter examines password crackers. Because these tools are of such significance in security, I will cover many different types, including those not expressly designed to crack Internet-related passwords.

What Is a Password Cracker?

The term password cracker can be misinterpreted, so I want to define it here. A password cracker is any program that can defeat password protection, whether by recovering passwords or by disabling the protection altogether. Despite the name, a password cracker need not decrypt anything. In fact, most of them don't. Real encrypted passwords, as you will shortly learn, cannot be decrypted.

A more precise way to explain this is as follows: encrypted passwords cannot be decrypted. Most modern password-encryption processes are one-way (that is, there is no process that can be executed in reverse to reveal the password in plain text).

Instead, simulation tools are used, utilizing the same algorithm as the original password program. Through a comparative analysis, these tools try to match encrypted versions of the password to the original (this is explained a bit later in this chapter). Many so-called password crackers are nothing but brute-force engines--programs that try word after word, often at high speeds. These rely on the theory that eventually, you will encounter the right word or phrase. This theory has been proven to be sound, primarily due to the factor of human laziness. Humans simply do not take care to create strong passwords. However, this is not always the user's fault:
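The comparative approach can be sketched in a few lines of Python. (I use SHA-256 from the standard library as a stand-in one-way function here purely for illustration; the real UNIX scheme, crypt(3), is covered later in this chapter, and the password and wordlist are invented.)

```python
import hashlib

def hash_password(word, salt):
    # Stand-in one-way function; UNIX actually uses DES-based crypt(3)
    return hashlib.sha256((salt + word).encode()).hexdigest()

def dictionary_attack(target_hash, salt, wordlist):
    """Never decrypt anything: hash each candidate word the same way
    the system did, and compare against the stolen hash."""
    for word in wordlist:
        if hash_password(word, salt) == target_hash:
            return word
    return None

# The victim chose a dictionary word, exactly the weakness described above
stolen = hash_password("sunshine", "xY")
print(dictionary_attack(stolen, "xY", ["password", "letmein", "sunshine"]))
# -> sunshine
```

Notice that the attack succeeds not because the cipher is weak, but because the password was guessable; a word absent from the list is never found.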

Users are rarely, if ever, educated as to what are wise choices for passwords. If a password is in the dictionary, it is extremely vulnerable to being cracked, and users are simply not coached as to "safe" choices for passwords. Of those users who are so educated, many think that simply because their password is not in /usr/dict/words, it is safe from detection. Many users also say that because they do not have private files online, they are not concerned with the security of their account, little realizing that by providing an entry point to the system they allow damage to be wrought on their entire system by a malicious cracker.1

1Daniel V. Klein, A Survey of, and Improvements to, Password Security. Software Engineering Institute, Carnegie Mellon University, Pennsylvania. (PostScript creation date reported: February 22, 1991.)


The problem is a persistent one, despite the fact that password security education demands minimal resources. It is puzzling how such a critical security issue (which can easily be addressed) is often overlooked. The issue goes to the very core of security:

...exploiting ill-chosen and poorly-protected passwords is one of the most common attacks on system security used by crackers. Almost every multi-user system uses passwords to protect against unauthorized logons, but comparatively few installations use them properly. The problem is universal in nature, not system-specific; and the solutions are simple, inexpensive, and applicable to any computer, regardless of operating system or hardware. They can be understood by anyone, and it doesn't take an administrator or a systems programmer to implement them.2

2K. Coady. Understanding Password Security For Users on & offline. New England Telecommuting Newsletter, 1991.


In any event, I want to define even further the range of this chapter. For our purposes, people who provide registration passwords or CD keys are not password crackers, nor are they particularly relevant here. Individuals who copy common registration numbers and provide them over the Internet are pirates. I discuss these individuals (and yes, I point to some sites) at the end of this chapter. Nevertheless, these people (and the files they distribute, which often contain thousands of registration numbers) do not qualify as password crackers.

NOTE: These registration numbers and programs that circumvent password protection are often called cracks. A Usenet newsgroup has actually been devoted to providing such passwords and registration numbers. Not surprisingly, within this newsgroup, many registration numbers are routinely trafficked, and the software to which they apply is also often posted there. That newsgroup is appropriately called alt.cracks.

The only exception to this rule is a program designed to subvert early implementations of the Microsoft CD key validation scheme (although the author of the source code did not intend that the program be used as a piracy tool). Some explanation is in order.

As part of its anti-piracy effort, Microsoft developed a method of consumer authentication that makes use of the CD key. When installing a Microsoft product for the first time, users are confronted by a dialog box that requests the CD key. This is a challenge to you; if you have a valid key, the software continues to install and all is well. If, however, you provide an invalid key, the installation routine exits on error, explaining that the CD key is invalid.

Several individuals examined the key validation scheme and concluded that it was poorly designed. One programmer, Donald Moore, determined that through the following procedure, a fictional key could be tested for authenticity. His formula is sound and basically involves these steps:

1. Take all numbers that are trivial and irrelevant to the key and discard them.

2. Add the remaining numbers together.

3. Divide the result by 7.

The number that you derive from this process is examined in decimal mode. If the number has no fractional part (there are no numeric values to the right of the decimal point), the key is valid. If the number contains a fractional part (there are numbers to the right of the decimal), the key is invalid. Moore then designed a small program that would automate this process.
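Moore's three steps reduce to a simple divisibility test. A sketch in Python (exactly which digits count as "trivial and irrelevant" is glossed over here; this version simply sums every digit it is handed):

```python
def cd_key_valid(digits):
    """Moore's test as described above: sum the significant digits and
    check that the total divides evenly by 7 (no fractional part).
    Which digits are 'significant' is an assumption; we sum them all."""
    total = sum(int(d) for d in digits if d.isdigit())
    return total % 7 == 0

print(cd_key_valid("1234567"))   # 1+2+...+7 = 28; 28/7 = 4.0 -> True
print(cd_key_valid("1234566"))   # sum 27; 27/7 has a fraction -> False
```

The weakness, of course, is that anyone can run the test in reverse: invent digit strings until one sums to a multiple of 7.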

Cross Reference: Moore's complete explanation and analysis of the CD key validation routine is located at http://www.apexsc.com/vb/lib/lib3.html.

The programmer also posted source code to the Internet, written in garden-variety C. I have compiled this code on several platforms and it works equally well on all. (The platforms I have compiled it on include DOS, NT, Linux, and AIX.) The utility is quite valuable, I have found, for I often lose my CD keys.

Cross Reference: The source code is located at http://www.futureone.com/~damaged/PC/Microsoft_CD_Key/mscdsrc.html.

This type of utility, I feel, qualifies in this chapter as a form of password cracker. I suspect that some of you will use this utility to subvert the CD key validation. However, in order to do so, you must first know a bit of C (and have a compiler available). My feeling is, if you have these tools, your level of expertise is high indeed, and you are probably beyond stealing software from Microsoft. (I hope.)

NOTE: Microsoft's method of protecting upgrade packages is also easily bypassed. Upgrades install as long as you have the first disk of a previous version of the specified software. Therefore, a user who obtains the first disk of Microsoft Visual Basic Professional 3.0, for example, can install the 4.0 upgrade. For this reason, some pirate groups distribute images of that first disk, which are then written to floppies. (In rare instances when the exact image must appear on the floppy, some people use rawrite.exe or dd.exe, two popular utilities that write an image directly to a floppy. This technique differs from copying it to a floppy.) In addition, it is curious to note that certain upgrade versions of VB will successfully install even without the floppy providing that Microsoft Office has been installed first.

I should make it clear that I do not condone piracy (even though I feel that many commercial software products are criminally overpriced). I use Linux and GNU. In that respect, I owe much to Linus Torvalds and Richard Stallman. I have no fear of violating the law because most of the software I use is free to be redistributed to anyone. (Also, I have found Linux to be more stable than many other operating systems that cost hundreds of dollars more.)

Linux is an entirely copy-free operating system, and the GNU suite of programs is under the General Public License. That is, you are free to redistribute these products to anyone at any time. Doing so does not violate any agreement with the software authors. Many of these utilities are free versions of popular commercial packages, including C and C++ compilers, Web-development tools, and just about anything else you can dream of. These programs are free to anyone who can download them. They are, quite frankly, a godsend to anyone studying development.

In any event, the password crackers I will be examining here are exactly that: they crack, destroy, or otherwise subvert passwords. I provide information about registration cracks at the end of the chapter. That established, let's move forward.

How Do Password Crackers Work?

To understand how password crackers work, you need only understand how password generators work. Most password generators rely on some form of cryptography, the practice of writing in code.


This definition is wide, and I want to narrow it. The etymological root of the word cryptography can help in this regard. Crypto stems from the Greek word kryptos. Kryptos was used to describe anything that was hidden, obscured, veiled, secret, or mysterious. Graph is derived from graphia, which means writing. Thus, cryptography is the art of secret writing. An excellent and concise description of cryptography is given by Yaman Akdeniz in his paper Cryptography & Encryption:

Cryptography defined as "the science and study of secret writing," concerns the ways in which communications and data can be encoded to prevent disclosure of their contents through eavesdropping or message interception, using codes, ciphers, and other methods, so that only certain people can see the real message.3

3Yaman Akdeniz, Cryptography & Encryption August 1996, Cyber-Rights & Cyber-Liberties (UK) at http://www.leeds.ac.uk/law/pgs/yaman/cryptog.htm. (Criminal Justice Studies of the Law Faculty of University of Leeds, Leeds LS2 9JT.)


Most passwords are subjected to some form of cryptography. That is, passwords are encrypted. To illustrate this process, let me reduce it to its most fundamental. Imagine that you created your own code, where each letter of the alphabet corresponded to a number.

Draw a table, or legend. Below each letter is a corresponding number. Thus, A = 7, B = 2, and so forth. This is a code of sorts, similar to the kind seen in secret-decoder kits found by children in their cereal boxes. You probably remember them: They came with decoder rings and sometimes even included a tiny code book for breaking the code manually.
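Here is that secret-decoder-ring legend in Python (A=7 and B=2 as above; the remaining assignments are arbitrary, as the text says they can be):

```python
# Build the legend: A=7 and B=2 as in the text; the remaining letters
# take the remaining numbers 1-26 in order (an arbitrary choice)
legend = {"A": 7, "B": 2}
unused = [n for n in range(1, 27) if n not in (7, 2)]
for letter, number in zip("CDEFGHIJKLMNOPQRSTUVWXYZ", unused):
    legend[letter] = number

def encode(message):
    """Each letter always becomes the same number, which is exactly
    why lexical analysis breaks this code so quickly."""
    return [legend[c] for c in message.upper() if c in legend]

print(encode("BAD"))   # -> [2, 7, 3]
```

Decoding is just the table read in reverse, which is all a cereal-box decoder ring ever did.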

Unfortunately, such a code can be easily broken. For example, if each letter has a fixed numeric counterpart (that is, that counterpart never changes), it means that you will only be using 26 different numbers (presumably 1 through 26, although you could choose numbers arbitrarily). Assume that the message you are seeking to hide contains letters but no numbers. Lexical analysis would reveal your code within a few seconds. There are software programs that perform such analysis at high speed, searching for patterns common to your language.
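Those analysis programs lean heavily on letter frequency. A minimal sketch of the idea (the ciphertext below is invented, and real tools also examine digraphs and word patterns):

```python
from collections import Counter

# Invented numeric ciphertext produced by a fixed letter-to-number legend.
# In English text, 'E' dominates, so the most frequent ciphertext symbol
# is a strong candidate for E, and the legend unravels from there.
ciphertext = [5, 12, 5, 3, 20, 5, 9, 5]
symbol, count = Counter(ciphertext).most_common(1)[0]
print(symbol, count)   # -> 5 4, our best first guess for the letter E
```

Each confirmed guess shrinks the search for the rest of the table, which is why a fixed substitution falls within seconds.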


Another method (slightly more complex) is where each letter becomes another letter, based on a standard, incremental (or decremental) operation. To demonstrate this technique, I will defer to ROT-13 encoding. ROT-13 is a method whereby each letter is replaced by a substitute letter. The substitute letter is derived by moving 13 letters ahead.

This, too, is an ineffective method of encoding or encrypting a message (although it reportedly worked in Roman times for Caesar, who used a shift-by-three formula). There are programs that quickly identify this pattern. However, this does not mean that techniques like ROT-13 are useless. I want to illustrate why and, in the process, I can demonstrate the first important point about passwords and encryption generally:

Any form of encryption may be useful, given particular circumstances. These circumstances may depend upon time, the sensitivity of the information, and from whom you want to hide data.

In other words, techniques like the ROT-13 implementation may be quite useful under certain circumstances. Here is an example: Suppose a user wants to post a cracking technique to a Usenet group. He or she has found a hole and wants to publicize it while it is still exploitable. Fine. To prevent bona-fide security specialists from discovering that hole as quickly as crackers, ROT-13 can be used.

Remember how I pointed out that groups like NCSA routinely download Usenet traffic on a wholesale basis? Many groups also use popular search engines to ferret out cracker techniques. These search engines primarily employ regex (regular expression) searches (that is, they search by word or phrase). For example, the searching party (perhaps NCSA, perhaps any interested party) may enter a combination of words such as


When this combination of words is entered correctly, a wealth of information emerges. Correctly might mean many things; each engine works slightly differently. For example, some render incisive results if the words are enclosed in quotation marks. This sometimes forces a search that is case sensitive. Equally, many engines provide for the use of different Boolean expressions. Some even provide fuzzy-logic searches or the capability to mark whether a word appears adjacent, before, or after another word or expression.

When the cracker applies the ROT-13 algorithm to a message, such search engines will miss the post. For example, the message

Guvf zrffntr jnf rapbqrq va EBG-13 pbqvat. Obl, qvq vg ybbx fperjl hagvy jr haeniryrq vg!

is clearly beyond the reach of the average search engine. What it really looks like is this:

This message was encoded in ROT-13 coding. Boy, did it look screwy until we unraveled it!
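The transformation itself is trivial to reproduce. A minimal sketch in Python (the helper name rot13 is mine) shifts each letter 13 places; because the alphabet has 26 letters, applying it twice restores the original:

```python
def rot13(text):
    # Shift each letter 13 places around the alphabet; everything
    # else (digits, spaces, punctuation) passes through untouched.
    result = []
    for ch in text:
        if 'a' <= ch <= 'z':
            result.append(chr((ord(ch) - ord('a') + 13) % 26 + ord('a')))
        elif 'A' <= ch <= 'Z':
            result.append(chr((ord(ch) - ord('A') + 13) % 26 + ord('A')))
        else:
            result.append(ch)
    return ''.join(result)

print(rot13("This message was encoded in ROT-13 coding."))
# -> Guvf zrffntr jnf rapbqrq va EBG-13 pbqvat.
```

Because encoding and decoding are the same operation, rot13(rot13(text)) always returns the original text.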

Most modern mail and newsreaders support ROT-13 encoding and decoding (Free Agent by Forte is one; Netscape Navigator's Mail package is another). Again, this is a very simple form of encoding something, but it demonstrates the concept. Now, let's get a bit more specific.

DES and Crypt

Many different operating systems are on the Internet. The majority of servers, however, run some form of UNIX. On the UNIX platform, all user login IDs and passwords are stored in a central location. For many years, that location was the file passwd in the directory /etc (/etc/passwd). Each entry in this file contains various fields. Of those, we are concerned with two: the login ID and the password.
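For illustration, here is how the two fields of interest can be pulled from a passwd entry. The sample line, its hash, and the helper name are invented for this sketch:

```python
def parse_passwd_line(line):
    # /etc/passwd fields are colon-separated; the first field is the
    # login ID and the second is the (encrypted) password.
    fields = line.strip().split(":")
    return {"login": fields[0], "encrypted_password": fields[1]}

# A fabricated entry in the classic seven-field format:
sample = "fred:Ab3xY9QkLm2Zo:1001:100:Fred:/home/fred:/bin/sh"
print(parse_passwd_line(sample))
# -> {'login': 'fred', 'encrypted_password': 'Ab3xY9QkLm2Zo'}
```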

The login ID is stored in plain text, or in perfectly readable English. (This is used as a key for encryption.) The password is stored in an encrypted form. The encryption process is performed using Crypt(3), a program based on the data encryption standard (DES). IBM developed the earliest version of DES; today, it is used on all UNIX platforms for password encryption. DES is endorsed jointly by the National Bureau of Standards and the National Security Agency. In fact, since 1977, DES has been the generally accepted method for safeguarding sensitive data.

DES was developed primarily for the protection of certain nonclassified information that might exist in federal offices. As set forth in Federal Information Processing Standards Publication 74, Guidelines for Implementing and Using the NBS Data Encryption Standard:

Because of the unavailability of general cryptographic technology outside the national security arena, and because security provisions, including encryption, were needed in unclassified applications involving Federal Government computer systems, NBS initiated a computer security program in 1973 which included the development of a standard for computer data encryption. Since Federal standards impact on the private sector, NBS solicited the interest and cooperation of industry and user communities in this work.

Information about the original mechanical development of DES is scarce. Reportedly, at the request of the National Security Agency, IBM caused certain documents to be classified. (They will likely remain so for some years to come.) However, the source code for Crypt(3) (the current implementation of DES in UNIX) is widely available. This is significant, because in all the years that source has been available for Crypt, no one has yet found a way to easily reverse-encode information encrypted with it.

TIP: Want to try your luck at cracking Crypt? Get the source! It comes with the standard GNU distribution of C libraries, which can be found at ftp://gatekeeper.dec.com/glibc-1.09.1.tar.gz. (Please note that if you are not on U.S. soil or within U.S. jurisdiction, you must download the source for Crypt from a site outside the United States. The site usually given for this is ftp://ftp.uni-c.dk./glibc-1.09-crypt.tar.z.)

Certain implementations of Crypt work differently. In general, however, the process is as follows:

1. Your password is taken in plain text (or, in cryptographic jargon, clear text).

2. Your password is then utilized as a key to encrypt a series of zeros (64 in all). The resulting encoded text is thereafter referred to as cipher text, the unreadable material that results after plain text has been encrypted.

Certain versions of Crypt, notably Crypt(3), take additional steps. For example, after going through this process, it encrypts the already encrypted text, again using your password as a key. This is a fairly strong method of encryption; it is extremely difficult to break.
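Conceptually, the scheme looks something like the sketch below. To keep it self-contained, SHA-256 from Python's hashlib stands in for the DES engine; the function name toy_crypt and its details are mine, not Crypt(3)'s actual internals:

```python
import hashlib

def toy_crypt(password, salt):
    # Conceptual stand-in for Crypt(3): the password and salt drive a
    # one-way function over a constant block, and only the result is
    # stored. (Real Crypt(3) uses the password as a DES key to encrypt
    # 64 zero bits repeatedly; SHA-256 here only illustrates one-wayness.)
    block = b"\x00" * 8                     # the constant being "encrypted"
    digest = hashlib.sha256(salt.encode() + password.encode() + block)
    return salt + digest.hexdigest()[:11]   # stored form: salt + cipher text

entry = toy_crypt("laura", "EY")
print(entry)   # 13 characters, like a classic /etc/passwd password field
```

Note that nothing in the stored form lets you run the process backward; the same password and salt always produce the same entry, which is exactly what makes the comparative attack described next possible.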

In brief, DES takes submitted data and encodes it using a one-way operation sometimes referred to as a hash. This operation is special from a mathematical point of view for one reason: While it is relatively simple to encode data this way, decoding it is computationally complex and resource intensive. It is estimated, for example, that the same password can be encoded in 4,096 different ways (one for each possible salt value). The average user, without any knowledge of the system, could probably spend his or her entire life attempting to crack DES and never be successful. To get that in proper perspective, examine an estimate from the National Institute of Standards and Technology:

The cryptographic algorithm [DES] transforms a 64-bit binary value into a unique 64-bit binary value based on a 56-bit variable. If the complete 64-bit input is used (i.e., none of the input bits should be predetermined from block to block) and if the 56-bit variable is randomly chosen, no technique other than trying all possible keys using known input and output for the DES will guarantee finding the chosen key. As there are over 70,000,000,000,000,000 (seventy quadrillion) possible keys of 56 bits, the feasibility of deriving a particular key in this way is extremely unlikely in typical threat environments.4

4NIST, December 30, 1993. "Data Encryption Standard (DES)," Federal Information Processing Standards Publication 46-2. http://csrc.nist.gov/fips/fips46-2.txt.


One would think that DES is entirely infallible. It isn't. Although the information cannot be reverse-encoded, passwords encrypted via DES can be revealed through a comparative process. The process works as follows:

1. You obtain a dictionary file, which is really no more than a flat file (plain text) list of words (these are commonly referred to as wordlists).

2. These words are fed through any number of programs that encrypt each word. Such encryption conforms to the DES standard.

3. Each resulting encrypted word is compared with the target password. If a match occurs, there is better than a 90 percent chance that the password was cracked.
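These three steps reduce to a short loop. In the self-contained sketch below, SHA-256 again stands in for the DES-based Crypt(3); only the comparative logic is the point, and all names are mine:

```python
import hashlib

def one_way(word, salt):
    # Stand-in for the DES-based Crypt(3); SHA-256 here merely
    # illustrates that the comparison happens on encrypted forms only.
    return salt + hashlib.sha256((salt + word).encode()).hexdigest()[:11]

def dictionary_attack(target_entry, wordlist):
    salt = target_entry[:2]                  # Crypt(3) stores the salt up front
    for word in wordlist:                    # step 2: encrypt each word...
        if one_way(word, salt) == target_entry:   # step 3: ...and compare
            return word                      # match: the password is revealed
    return None

target = one_way("laura", "EY")    # pretend this came from /etc/passwd
print(dictionary_attack(target, ["fred", "secret", "laura"]))  # -> laura
```

The target password is never decrypted; it is simply matched against freshly encrypted guesses.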

This in itself is amazing; nevertheless, password-cracking programs made for this purpose are even more amazing than they initially appear. For example, such cracking programs often subject each word to a list of rules. A rule could be anything, any manner in which a word might appear. Typical rules might include

	Alternate upper- and lowercase lettering.
	Spell the word forward and then backward, and then fuse the two results (for example: cannac).
	Add the number 1 to the beginning and/or end of each word.
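The rules just listed might be sketched like this (the function name and the ordering of the variants are my own):

```python
def apply_rules(word):
    # Generate candidate variants of one dictionary word, one per rule.
    variants = [word]
    # Rule 1: alternate upper- and lowercase lettering.
    variants.append(''.join(c.upper() if i % 2 == 0 else c.lower()
                            for i, c in enumerate(word)))
    # Rule 2: fuse the word with its reverse (can -> cannac).
    variants.append(word + word[::-1])
    # Rule 3: add the number 1 to the beginning and/or end.
    variants.extend(['1' + word, word + '1', '1' + word + '1'])
    return variants

print(apply_rules("can"))
# -> ['can', 'CaN', 'cannac', '1can', 'can1', '1can1']
```

Each variant is then encrypted and compared exactly as a plain dictionary word would be; every additional rule multiplies the number of encryptions per word.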

Naturally, the more rules one applies to the words, the longer the cracking process takes. However, more rules also guarantee a higher likelihood of success. This is so for a number of reasons:

	The UNIX file system is case sensitive (WORKSTATION is interpreted differently than Workstation or workstation). That alone makes a UNIX password infinitely more complex to crack than a password generated on a DOS/Windows platform, where case is ignored.
	Alternating letters and numbers in passwords is a common practice by those aware of security issues. When cracking passwords from such a source, many rules should be applied.

The emergence of such programs has greatly altered the security of the Internet. The reasons can be easily understood by anyone. One reason is because such tools are effective:

Crypt uses the resistance of DES to known plain text attack and make it computationally unfeasible to determine the original password that produced a given encrypted password by exhaustive search. The only publicly known technique that may reveal certain passwords is password guessing: passing large wordlists through the crypt function to see if any match the encrypted password entries in an /etc/passwd file. Our experience is that this type of attack is successful unless explicit steps are taken to thwart it. Generally we find 30 percent of the passwords on previously unsecured systems.5

5David Feldmeier and Philip R. Karn. UNIX Password Security--Ten Years Later. (Bellcore).

Another reason is that the passwords on many systems remain available. In other words, for many years, the task of the cracker was nearly over if he or she could obtain that /etc/passwd file. When in possession of the encrypted passwords, a suitably powerful machine, and a cracking program, the cracker was ready to crack (provided, of course, that he or she had good wordlists).

Wordlists are generally constructed with one word per line, in plain text, and using no carriage returns. They average at about 1MB each (although one could feasibly create a wordlist some 20MB in size). As you may have guessed, many wordlists are available on the Internet; these come in a wide variety of languages (thus, an American cracker can crack an Italian machine and vice versa).

Cross Reference: There are a few popular depositories for wordlists. These collections contain every imaginable type of wordlist. Some are simply dictionaries and others contain hyphenated words, upper and lower case, and so on. One exceptionally good source is at http://sdg.ncsa.uiuc.edu/~mag/Misc/Wordlists.html. However, perhaps the most definitive collection is available at the COAST project at Purdue. Its page is located at http://www.cs.purdue.edu/coast/.

The Password-Cracking Process

Hardware Issues

A 66MHz machine or higher is typical. Indeed, it is a basic requirement. Without delving deep into an argument for this or that processor (or this or that platform), I should at least state this: In actual practice, cracking a large password file is a CPU- and memory-intensive task. It can often take days. Whether you are a hobbyist, cracker, or system administrator, you would be well advised to take note of this point. Before actually cracking a large password file, you might want to inventory your equipment and resources.

I have found that to perform a successful (and comfortable) crack of a large password file, one should have a 66MHz processor and 32MB of RAM (or better). It can be done with less, even a 25MHz processor and 8MB of RAM. However, if you use a machine so configured, you cannot expect to use it for any other tasks. (At least, this is true of any IBM AT compatible. I have seen this done on a Sun SPARCstation 1 and the user was still able to run other processes, even in OpenWindows.)

Equally, there are techniques for overcoming this problem. One is the parlor trick of distributed cracking. Distributed cracking is where the cracker runs the cracking program in parallel, on separate processors. There are a few ways to do this. One is to break the password file into pieces and crack those pieces on separate machines. In this way, the job is distributed among a series of workstations, thus cutting resource drain and the time it takes to crack the entire file.
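The splitting itself is the easy part; a hypothetical sketch follows (the round-robin scheme and all names are mine):

```python
def split_password_file(entries, n_machines):
    # Deal password-file entries round-robin across n worker machines;
    # each slice can then be cracked independently, in parallel.
    slices = [[] for _ in range(n_machines)]
    for i, entry in enumerate(entries):
        slices[i % n_machines].append(entry)
    return slices

# Fabricated entries, abbreviated to login and password fields:
entries = ["root:Xb9qL0m2", "fred:aQ3kZ8p1", "laura:Mn2vR7c4", "guest:Tz5wH6y3"]
for machine, work in enumerate(split_password_file(entries, 2)):
    print("machine", machine, "cracks", work)
```

With two machines, the four entries above are divided evenly, roughly halving the wall-clock time of the crack.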

The problem with distributed cracking is that it makes a lot of noise. Remember the Randal Schwartz case? Mr. Schwartz probably would never have been discovered if he were not distributing the CPU load. Another system administrator noticed the heavy processor power being eaten. (He also noted that one process had been running for more than a day.) Thus, distributed cracking really isn't viable for crackers unless they are the administrator of a site or they have a network at home (which is not so unusual these days; I have a network at home that consists of Windows 95, Windows NT, Linux, Sun, and Novell boxes).

The Mechanics of Password Cracking

In any event the wordlist is sent through the encryption process, generally one word at a time. Rules are applied to the word and, after each such application, the word is again compared to the target password (which is also encrypted). If no match occurs, the next word is sent through the process.

Some password crackers perform this task differently. Some take the entire list of words, apply a rule, and from this derive their next list. This list is then encrypted and matched against the target password. The difference is not academic. The second technique is probably much faster.

In the final stage, if a match occurs, the password is then deemed cracked. The plain-text word is then piped to a file (recorded in a plain-text file for later examination).

It is of some significance that the majority of password cracking utilities are not user friendly. In fact, when executed, some of them forward nothing more than a cryptic message, such as


Most also do not have extensive documentation with them. There are a few reasons for this phenomenon:


The Password Crackers

The remainder of this chapter is devoted to individual password crackers. Some are made for cracking UNIX passwd files, and some are not. Some of the tools here are not even password crackers; instead, they are auxiliary utilities that can be used in conjunction with (or for the improvement of) existing password crackers.

Crack by Alec Muffett

Crack is probably the most celebrated tool for cracking encrypted UNIX passwords. It is now the industry standard for checking networks for characteristically weak passwords. It was written by Alec D. E. Muffett, a UNIX software engineer in Wales. In the docs provided with the distribution, Mr. Muffett concisely articulates the program's purpose:

Crack is a freely available program designed to find standard UNIX eight-character DES encrypted passwords by standard guessing techniques...It is written to be flexible, configurable and fast, and to be able to make use of several networked hosts via the Berkeley rsh program (or similar), where possible.

Crack is for use on UNIX platforms only. It comes as a tarred, gzipped file and is available at so many sites, I will refrain from listing them here (use the search string crack-4.1.tar.gz or crack-4.1.tar.Z). After it is downloaded to the local disk, it is unzipped and untarred into a suitable directory (I prefer putting it into the /root/ directory tree).

To get up and running, you need only set the root directory for Crack (this is the directory beneath which all the Crack resources can be found). This value is assigned to a variable (Crack_Home) in the configuration files. This is merely an environment variable that, when set, tells the Crack program where the remaining resources reside. To set this variable, edit the file Crack, which is a /bin/sh script that starts up the Crack engine. After editing this file, you can begin. This file, which consists of plain-text commands, code, and variables, can be edited in any text editor or word processor. However, it must be saved to plain text.

NOTE: You may or may not need to quickly acquire a wordlist. As it happens, many distributions of Crack are accompanied by sample wordlist (or dictionary) files. Your mileage may vary in this respect. I would suggest getting your copy of Crack from established (as opposed to underground) sites. This will make it more likely that you will get a sample wordlist (although to do any serious password cracking, you will need to acquire bigger and more suitable wordlists).

You initiate a Crack session by calling the program and providing the name of a password file and any command-line arguments, including specifications for using multiple workstations and such. If you refer to Xterm, you will see a file there named my_password_file. This is a sample passwd file that I cracked to generate an example. To crack that file, I issued the following command:

Crack my_password_file

Crack started the process and wrote the progress of the operation to files with an out prefix. In this case, the file was called outSamsHack300. Following is an excerpt from that file; examine it closely.

pwc: Jan 30 19:26:49 Crack v4.1f: The Password Cracker, (c) Alec D.E. Muffett, 1992
pwc: Jan 30 19:26:49 Loading Data, host=SamsHack pid=300
pwc: Jan 30 19:26:49 Loaded 2 password entries with 2 different salts: 100%
pwc: Jan 30 19:26:49 Loaded 240 rules from `Scripts/dicts.rules'.
pwc: Jan 30 19:26:49 Loaded 74 rules from `Scripts/gecos.rules'.
pwc: Jan 30 19:26:49 Starting pass 1 - password information
pwc: Jan 30 19:26:49 FeedBack: 0 users done, 2 users left to crack.
pwc: Jan 30 19:26:49 Starting pass 2 - dictionary words
pwc: Jan 30 19:26:49 Applying rule `!?Al' to file `Dicts/bigdict'
pwc: Jan 30 19:26:50 Rejected 12492 words on loading, 89160 words left to sort
pwc: Jan 30 19:26:51 Sort discarded 947 words; FINAL DICTIONARY SIZE: 88213
pwc: Jan 30 19:27:41 Guessed ROOT PASSWORD root (/bin/bash) (in my_password_file) [laura] EYFu7c842Bcus
pwc: Jan 30 19:27:41 Closing feedback file.

As you can see, Crack guessed the correct password for root. This process took just under a minute. Line 1 reveals the time at which the process was initiated (Jan 30 19:26:49); line 12 reveals that the password--laura--was cracked at 19:27:41. This was done using a 133MHz processor and 32MB of RAM.

Because the password file I used was so small, neither time nor resources was an issue. In practice, however, if you are cracking a file with hundreds of entries, Crack will eat resources voraciously. This is especially so if you are using multiple wordlists that are in compressed form. (Crack will actually identify these as compressed files and will uncompress them.)

As mentioned earlier, Crack can distribute the work to different workstations on a UNIX network. Even more extraordinary than this, the machines can be of different architectures. Thus, you might have an IBM-compatible running Linux, a RS/6000 running AIX, and a Macintosh running A/UX.

Crack is extremely lightweight and is probably the most reliable password cracker available.

TIP: To perform a networked cracking session, you must build a network.conf file. This is used by the program to identify which hosts to network, their architecture, and other key variables. One can also specify command-line options that are invoked as Crack is unleashed on each machine. In other words, each machine may be running Crack and using different command-line options. This can be conveniently managed from one machine.

Cross Reference: Macintosh users can also enjoy the speed and efficiency of Crack by using the most recent port of it, called MacKrack v2.01b1. It is available at http://www.borg.com/~docrain/mac-hack.html.

CrackerJack by Jackal

CrackerJack is a renowned UNIX password cracker designed expressly for the DOS platform. Contrary to popular notions, CrackerJack is not a straight port of Crack (not even close). Nevertheless, CrackerJack is an extremely fast and easy-to-use cracking utility. For several years, CrackerJack has been the choice for DOS users; although many other cracker utilities have cropped up, CrackerJack remains quite popular (it's a cult thing). Later versions were reportedly compiled using GNU C and C++. CrackerJack's author reports that through this recompiling process, the program gained noticeable speed.

TIP: CrackerJack also now works on the OS/2 platform.

There are some noticeable drawbacks to CrackerJack, including


Despite these snags, CrackerJack is reliable and, for moderate tasks, requires only limited resources. It takes sparse processor power, doesn't require a windowed environment, and can run from a floppy.

Cross Reference: CrackerJack is widely available, although not as widely as one would expect. Here are a few reliable sites:

PaceCrack95 (pacemkr@bluemoon.net)

PaceCrack95 is designed to work on the Windows 95 platform in console mode, in a shell window. Its author reports that PaceCrack95 was prompted by deficiencies in other DOS-based crackers. He writes:

Well you might be wondering why I have written a program like this when there already is [sic] many out there that do the same thing. There are many reasons, I wanted to challenge myself and this was a useful way to do it. Also there was this guy (Borris) that kept bugging me to make this for him because Cracker Jack (By Jackal) doesn't run in Win95/NT because of the weird way it uses the memory. What was needed was a program that runs in Win95 and the speed of the cracking was up there with Cracker Jack.

To the author's credit, he created a program that does just that. It is fast, compact, and efficient. Unfortunately, however, PaceCrack95 is a new development not yet widely available (I believe it was distributed in July 1996).

Cross Reference: There is a shortage of reliable sites from which to retrieve PaceCrack95, but it can be found at http://tms.netrom.com/~cassidy/crack.htm.

Qcrack by the Crypt Keeper

Qcrack was originally designed for use on the Linux platform. It has recently been ported to the MS-DOS/Windows platform (reportedly sometime in July 1996). Qcrack is therefore among the newest wave of password crackers to crop up in the last year or so, widening the field of available choices. This utility is extremely fast, but there are some major drawbacks. One relates to storage. As the author, the Crypt Keeper, explains:

QInit [one of several binaries in the distribution] generates a hash table where each entry corresponds to a salt value and contains the first two bytes of the hash. Each password becomes about 4KB worth of data, so this file gets large quickly. A file with 5000 words can be expected to be 20MB of disk. This makes it important to have both a lot of disk space, and a very select dictionary. Included, a file called cpw is a list containing what I consider to be "good" words for the typical account. I have had zero hits with this file on some password files, and I have also had almost a 30 percent hit rate on others.

NOTE: Qcrack is a bit slower than some other utilities of this nature, but is probably worth it. Parallelizing is possible, but not in the true sense. Basically, one can use different machines with different dictionaries (as Qcrack's author suggests). However, this is not the same form of parallelizing that can be implemented with Muffett's Crack. (Not to split hairs, but using Qcrack in this fashion will still greatly speed up the crack.)

Just one more interesting tidbit: The author of Qcrack, in a stroke of vision, suggested that someone create a CD-ROM of nothing but wordlist dictionaries (granted, this would probably be of less use to those with slow CD-ROMs; repeated access across drives could slow the system a bit).

Cross Reference: Qcrack can be found in the following places:

John the Ripper by Solar Designer

John the Ripper is a relatively new UNIX password cracker that runs on the DOS/Windows 95 platform. The binary distribution suggests that the coding was finished in December 1996. Early distributions of this program were buggy. Those of you working with less than 4MB of RAM might want to avoid this utility. Its author suggests that the program can run with less than 4MB, but a lot of disk access will be going on.

Cross Reference: John the Ripper runs on Linux as well. The Linux version is currently in beta and is being distributed as an ELF binary. It can be found by searching for the string john-linux.tar.zip.

Undoubtedly, these early efforts were flawed because the author attempted to include so many functions. Although John the Ripper may not yet be perfect, it is sizing up as quite a program. It runs in DOS (or in Windows 95 via a shell window) and has extensive options.

In this respect, John incorporates many of the amenities and necessities of other, more established programs. I fully expect that within six months of this writing, John the Ripper will be among the most popular cracking utilities.

Cross Reference: The DOS version of John the Ripper, which is relatively large in terms of password crackers, can be found at http://tms.netrom.com/~cassidy/crack.htm.

Pcrack (PerlCrack; Current Version Is 0.3) by Offspring and Nave

Pcrack is a Perl script for use on the UNIX platform (this does not mean that Pcrack couldn't be implemented on the NT platform; it simply means that some heavy-duty porting would be in order). This utility has its advantages because it is quite compact and, when loaded onto the interpreter, fast. Nonetheless, one must obviously have not only some form of UNIX, but also access to Perl. As I have already pointed out, such utilities are best employed by someone with root access to a UNIX box. Many system administrators have undertaken the practice of restricting Perl access these days.

Cross Reference: Pcrack is not widely available, but http://tms.netrom.com/~cassidy/crack.htm appears to be a reliable source.

Hades by Remote and Zabkar (?)

Hades is yet another cracking utility that reveals UNIX /etc/passwd passwords. Or is it? Hades is very fast, faster than Muffett's Crack and far faster than CrackerJack (at least in tests I have performed).

The distribution comes with some source code and manual pages, as well as an advisory, which I quote here:

We created the Hades Password Cracker to show that world-readable encrypted passwords in /etc/passwd are a major vulnerability of the UNIX operating system and its derivatives. This program can be used by system operators to discover weak passwords and disable them, in order to make the system more secure.

With the exception of Muffett's Crack, Hades is the most well-documented password cracker available. The authors have taken exceptional care to provide you with every possible amenity. The Hades distribution consists of a series of small utilities that, when employed together, formulate a powerful cracking suite. For each such utility, a man (manual) page exists. The individual utilities included with the distribution perform the following functions:

Cross Reference: Hades is so widely available that I will refrain from giving a list of sites here. Users who wish to try out this well-crafted utility should search for one or both of the following search terms:

Star Cracker by the Sorcerer

Star Cracker was designed to work under the DOS4GW environment. Okay...this particular utility is a bit of a curiosity. The author was extremely thorough, and although the features he or she added are of great value and interest, one wonders when the author takes out time to have fun. In any event, here are some of the more curious features:


To UNIX users, this second amenity doesn't mean much. UNIX users have always had the ability to time jobs. However, on the DOS platform, this capability has been varied and scarce (although there are utilities, such as tm, that can schedule jobs).

Moreover, this cracking utility has a menu of options: functions that make the cracking process a lot easier. You've really got to see this one to believe it. A nicely done job.

Cross Reference: Star Cracker is available at http://citus.speednet.com.au/~ramms/.

Killer Cracker by Doctor Dissector

Killer Cracker is another fairly famous cracking engine. It is almost always distributed as source code. The package compiles without event on a number of different operating systems, although I would argue that it works best under UNIX.

NOTE: Unless you obtain a binary release, you will need a C compiler.

Killer Cracker has so many command-line options, it is difficult to know which ones to mention here. Nonetheless, here are a few highlights of this highly portable and efficient cracking tool:

In all, this program is quite complete. Perhaps that is why it remains so popular. It has been ported to the Macintosh operating system, it works on a DOS system, and it was designed under UNIX. It is portable and easily compiled.

Cross Reference: Killer Cracker can be obtained at these locations:

Hellfire Cracker by the Racketeer and the Presence

Another grass-roots work, Hellfire Cracker is a utility for cracking UNIX password files using the DOS platform. It was developed using the GNU compiler. This utility is quite fast, although not by virtue of the encryption engine. Its major drawback is that user-friendly functions are practically nonexistent. Nevertheless, it makes up for this in speed and efficiency.

One amenity of Hellfire is that it is now distributed almost exclusively in binary form, which obviates the need for a C compiler.

Cross Reference: This utility can be found on many sites, but I have encountered problems finding reliable ones. This one, however, is reliable: http://www.ilf.net/~toast/files/.

XIT by Roche'Crypt

XIT is yet another UNIX /etc/passwd file cracker, but it is a good one. Distinguishing characteristics include


The XIT utility has been around for several years. However, it is not as widely available as one would expect. It also comes in various compressed formats, although most are zipped.

Cross Reference: One reliable place to find XIT is http://www.ilf.net/~toast/files/xit20.zip.

Claymore by the Grenadier

The Claymore utility is slightly different from its counterparts. It runs on any Windows platform, including 95 and NT.

NOTE: Claymore does not work in DOS or even a DOS shell window.

There is not a lot to this utility, but some amenities are worth mentioning. First, Claymore can be used as a brute force cracker for many systems. It can be used to crack UNIX /etc/passwd files, but it can also be used to crack other types of programs (including those requiring a login/password pair to get in).

One rather comical aspect of this brute force cracker is its overzealousness. According to the author:

Keep an eye on the computer. Claymore will keep entering passwords even after it has broken through. Also remember that many times a wrong password will make the computer beep so you may want to silence the speaker. Sometimes Claymore will throw out key strokes faster than the other program can except them. In these cases tell Claymore to repeat a certain key stroke, that has no other function in the target program, over and over again so that Claymore is slowed down and the attacked program has time to catch up.

This is what I would classify as a true, brute-force cracking utility! One interesting aspect is this: You can specify that the program send control and other nonprintable characters during the crack. The structure of the syntax to do so suggests that Claymore was written in Microsoft Visual Basic. Moreover, one almost immediately draws the conclusion that the VB function SendKeys plays a big part of this application. In any event, it works extremely well.

Cross Reference: Claymore is available at many locations on the Internet, but http://www.ilf.net/~toast/files/claym10.zip is almost guaranteed to be available.

Guess by Christian Beaumont

Guess is a compact, simple application designed to attack UNIX /etc/passwd files. It is presented with style but not much pomp. The interface is designed for DOS, but will successfully run through a DOS windowed shell. Of main interest is the source, which is included with the binary distribution. Guess was created sometime in 1991, it seems. For some reason, it has not yet gained the notoriety of its counterparts; this is strange, for it works well.

Cross Reference: Guess is available widely, so I will refrain from listing locations here. It is easy enough to find; use the search string guess.zip.

PC UNIX Password Cracker by Doctor Dissector

I have included the PC UNIX Password Cracker utility (which runs on the DOS platform) primarily for historical reasons. First, it was released sometime in 1990. As such, it includes support not only for 386 and 286 machines, but for 8086 machines. (That's right. Got an old XT lying around the house? Put it to good use and crack some passwords!) I won't dwell on this utility, but I will say this: The program is extremely well designed and has innumerable command-line options. Naturally, you will probably want something a bit more up to date (perhaps other work of the good Doctor's) but if you really do have an old XT, this is for you.

Cross Reference: PC UNIX Cracker can be found at http://www.ilf.net/~toast/files/pwcrackers/pcupc201.zip.

Merlin by Computer Incident Advisory Capability (CIAC) DOE

Merlin is not a password cracker. Rather, it is a tool for managing password crackers as well as scanners, audit tools, and other security-related utilities. In short, it is a fairly sophisticated tool for holistic management of the security process.

Merlin is for UNIX platforms only. It has reportedly been tested (with positive results) on a number of flavors, including but not limited to IRIX, Linux, SunOS, Solaris, and HP-UX.

One of the main attractions of Merlin is this: Although it has been specifically designed to support only five common security tools, it is highly extensible (it is written in Perl almost exclusively). Thus, one could conceivably incorporate any number of tools into the scheme of the program.

Merlin is a wonderful tool for integrating a handful of command-line tools into a single, easily managed package. It addresses the fact that the majority of UNIX-based security programs are based in the command-line interface (CLI). The five applications supported are

Note that Merlin does not supply any of these utilities in the distribution. Rather, you must acquire these programs and then configure Merlin to work with them (similar to the way one configures external viewers and helpers in Netscape's Navigator). The concept may seem lame, but the tool provides an easy, centralized point from which to perform some fairly common (and grueling) security tasks. In other words, Merlin is more than a bogus front-end. In my opinion, it is a good contribution to the security trade.

TIP: Those who are new to the UNIX platform may have to do a little hacking to get Merlin working. For example, Merlin relies on you to have correctly configured your browser to properly handle *.pl files (it goes without saying that Perl is one requisite). Also, Merlin apparently runs an internal HTTP server and looks for connections from the local host. This means you must have your system properly configured for loopback.

Merlin (and programs like it) represents an important and growing trend (a trend kicked off by Farmer and Venema). Because such programs are built primarily on an HTML/Perl base, they are highly portable across platforms in the UNIX community. They also tend to consume few network resources and, after the code has been loaded into the interpreter, they move pretty fast. Finally, these tools are easier to use, making security less of an insurmountable task. The data is right there and easily manipulated. This can only help strengthen security and provide newbies with an education.

Other Types of Password Crackers

Now you'll venture into more exotic areas. Here you will find a wide variety of password crackers for almost any type of system or application.

ZipCrack by Michael A. Quinlan

ZipCrack does just what you would think it would: It is designed to brute-force passwords that have been applied to files with a *.zip extension (in other words, it cracks the password on files generated with PKZIP).

No docs are included in the distribution (at least, not the few files that I have examined), but I am not sure there is any need. The program is straightforward. You simply provide the target file, and the program does the rest.

The program was written in Turbo Pascal, and the source code is included with the distribution. ZipCrack will work on any IBM-compatible that is a 286 or higher. The file description reports that ZipCrack will crack all those passwords generated by PKZIP 2.0. The author also warns that although short passwords can be obtained within a reasonable length of time, long passwords can take "centuries." Nevertheless, I sincerely doubt that many individuals provide passwords longer than five characters. ZipCrack is a useful utility for the average toolbox; it's one of those utilities that you think you will never need and later, at 3:00 in the morning, you swear bitterly because you don't have it.
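The dictionary-attack pattern ZipCrack automates can be sketched as follows. This is an illustrative Python sketch, not ZipCrack's code; it assumes a target archive protected with legacy PKZIP (ZipCrypto) encryption, which Python's zipfile module can attempt to read when handed a password via pwd=:

```python
import zipfile

def dictionary_attack(try_password, wordlist):
    """Return the first candidate that try_password accepts, else None."""
    for word in wordlist:
        if try_password(word):
            return word
    return None

def make_zip_checker(path):
    """Build a checker for a legacy PKZIP (ZipCrypto) protected archive.
    A wrong password raises an error when the member is read."""
    zf = zipfile.ZipFile(path)
    member = zf.namelist()[0]
    def try_password(word):
        try:
            zf.read(member, pwd=word.encode())
            return True                       # decryption and CRC check succeeded
        except (RuntimeError, zipfile.BadZipFile):
            return False
    return try_password

# Usage against a real archive (path and wordlist are hypothetical):
#   checker = make_zip_checker("secret.zip")
#   print(dictionary_attack(checker, ["letmein", "dragon", "hunter2"]))
print(dictionary_attack(lambda w: w == "open", ["abc", "open", "zzz"]))  # open
```

The driver is deliberately separated from the checker: swap in a different try_password function and the same loop cracks anything that gives a yes/no answer per candidate.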

Cross Reference: ZipCrack is widely available; use the search string zipcrk10.zip.

Fast Zip 2.0 (Author Unknown)

Fast Zip 2.0 is, essentially, identical to ZipCrack. It cracks zipped passwords.

Cross Reference: To find Fast Zip 2.0, use the search string fzc101.zip.

Decrypt by Gabriel Fineman

An obscure but nonetheless interesting utility, Decrypt breaks WordPerfect passwords. It is written in BASIC and works well. The program is not perfect, but it is successful a good deal of the time. The author reports that Decrypt checks for passwords with keys from 1 through 23. The program was released in 1993 and is widely available.
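Decrypt's exact method is not documented here, but the report that it checks keys from 1 through 23 suggests a small key space searched exhaustively against a known file header. As a hedged illustration only (a toy single-byte XOR cipher, not WordPerfect's real scheme), a known-plaintext search over such a key space looks like this:

```python
def xor_crypt(data, key):
    """Toy cipher: XOR every byte with a single key byte (self-inverse)."""
    return bytes(b ^ key for b in data)

def search_key(ciphertext, known_prefix, key_range=range(1, 24)):
    """Try each key; accept the one whose decryption starts with the expected header."""
    for key in key_range:
        if xor_crypt(ciphertext, key).startswith(known_prefix):
            return key
    return None

# Documents of a given format start with a predictable header, so we
# can recognize a correct decryption without knowing the body.
ciphertext = xor_crypt(b"WPC document body", 7)
print(search_key(ciphertext, b"WPC"))  # 7
```

With only 23 possible keys, the search is instantaneous; this is why small key spaces offer no real protection regardless of the cipher wrapped around them.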

Cross Reference: To find Decrypt, use the search string decrypt.zip.

Glide (Author Unknown)

There is not a lot of documentation with the Glide utility. This program is used exclusively to crack PWL files, which are password files generated in Microsoft Windows for Workgroups and later versions of Windows. The lack of documentation, I think, is forgivable. The C source is included with the distribution. For anyone who hacks or cracks Microsoft Windows boxes, this utility is a must.

Cross Reference: Glide is available at these locations:

AMI Decode (Author Unknown)

The AMI Decode utility is designed expressly to grab the CMOS password from any machine using an American Megatrends BIOS. Before you go searching for this utility, you might try the factory-default CMOS password. It is, oddly enough, AMI. In any event, the program works, and that is what counts.

Cross Reference: To find AMI Decode, use the search string amidecod.zip.

NetCrack by James O'Kane

NetCrack is an interesting utility for use on the Novell NetWare platform. It applies a brute-force attack against the bindery. It's slow, but still quite reliable.

Cross Reference: To find NetCrack, use the search string netcrack.zip.

PGPCrack by Mark Miller

Before readers who use PGP get worked up, a bit of background is in order. Pretty Good Privacy (PGP) is probably the strongest and most reliable encryption utility available to the public sector. Its author, Phil Zimmermann, sums it up as follows:

PGP™ uses public-key encryption to protect e-mail and data files. Communicate securely with people you've never met, with no secure channels needed for prior exchange of keys. PGP is well featured and fast, with sophisticated key management, digital signatures, data compression, and good ergonomic design.

PGP can apply a series of encryption techniques. One of these is IDEA (the International Data Encryption Algorithm, a block cipher with a 128-bit key). To give you an idea of how difficult IDEA is to crack, here is an excerpt from the PGP Attack FAQ, authored by Route (an authority on encryption and a member of "The Guild," a hacker group):

If you had 1,000,000,000 machines that could try 1,000,000,000 keys/sec, it would still take all these machines longer than the universe as we know it has existed and then some, to find the key. IDEA, as far as present technology is concerned, is not vulnerable to brute-force attack, pure and simple.
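Route's claim is easy to verify with back-of-the-envelope arithmetic, given IDEA's 128-bit key. The figure for the age of the universe is a rough one:

```python
keys = 2 ** 128                          # IDEA's key space
rate = 1_000_000_000 * 1_000_000_000     # 10^9 machines x 10^9 keys/sec
seconds = keys / rate                    # worst case; average search is half this
years = seconds / (3600 * 24 * 365)
age_of_universe = 1.4e10                 # years, rough figure
print(round(years / age_of_universe))    # hundreds of times the universe's age
```

The exhaustive search comes out to roughly 10^13 years, which is indeed "longer than the universe as we know it has existed and then some."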

In essence, a message encrypted using a 1024-bit key generated with a healthy and long passphrase is, for all practical purposes, unbreakable. So why did Mr. Miller author this interesting tool? Because passphrases can be poorly chosen and, if a PGP-encrypted message is to be cracked, the passphrase is a good place to start. Miller reports:

On a 486/66DX, I found that it takes about 7 seconds to read in a 1.2 megabyte passphrase file and try to decrypt the file using every passphrase. Considering the fact that the NSA, other government agencies, and large corporations have an incredible amount of computing power, the benefit of using a large, random passphrase is quite obvious.

Is this utility of any use? It is quite promising. Miller includes the source with the distribution as well as a file of possible passphrases (I have found at least one of those passphrases to be one I have used). The program is written in C and runs in the DOS, UNIX, and OS/2 environments.

Cross Reference: PGPCrack is available at several, reliable locations, including

The ICS Toolkit by Richard Spillman

The ICS Toolkit utility is an all-purpose utility for studying cryptanalysis. It runs well in Microsoft Windows 3.11 but is more difficult to use in Windows 95 or Windows NT. It uses an older version of VBRUN300.DLL; therefore, users with later versions would be wise to move the newer copy to a temporary directory. (The ICS application will not install unless it can place its version of VBRUN300.DLL into the c:\windows\system directory.) This utility will help you learn how ciphers are created and how to break them. It is really quite comprehensive, although it takes some ingenuity to set up. It was programmed for older versions of Microsoft Windows, and the interface is more utilitarian than attractive.

EXCrack by John E. Kuslich

The EXCrack utility recovers passwords applied in the Microsoft Excel environment. Mr. Kuslich is very clear that this software is not free but licensable (and copyrighted); therefore, I have refrained from providing screenshots or quoted information. It is safe to say that the utility works well.

Cross Reference: To find EXCrack, use the search string excrak.zip.

CP.EXE by Lyal Collins

CP.EXE recovers or cracks passwords for CompuServe that are generated in CISNAV and WINCIM. It reportedly works on DOSCIM passwords as well. It is a fast and reliable way to test whether your password is vulnerable to attack.

Cross Reference: This utility has been widely distributed and can be found by issuing the search string cis_pw.zip.

Password NT by Midwestern Commerce, Inc.

The Password NT utility recovers, or cracks, administrator password files on the Microsoft Windows NT 3.51 platform. In this respect, it is the NT equivalent of any program that cracks the root account in UNIX. Note that some hacking is required to use this utility: if the original drive on which the target password file is located is NTFS (and therefore access-control options are enabled), you will need to move the password file to a drive that is not access-control protected. To do this, you must move it to a drive also running 3.51 workstation or server. Therefore, this isn't really an instant solution. Nevertheless, after everything is properly set up, it will take no time at all.

Cross Reference: A nicely done utility, Password NT is always available at the company's home page (http://www.omna.com/yes/AndyBaron/recovery.htm).

There are well over 100 other utilities of a similar character. I will refrain from listing them here. I think that the previous list is sufficient to get you started studying password security. At least you can use these utilities to test the relative strength of your passwords.


At this stage, I would like to address some concepts in password security, as well as give you sources for further education.

I hope that you will go to the Net and retrieve each of the papers I am about to cite. If you are serious about learning security, you will follow this pattern throughout this book. By following these references in the order they are presented, you will gain an instant education in password security. However, if your time is limited, the following paragraphs will at least provide you with some insight into password security.

About UNIX Password Security

UNIX password security, when implemented correctly, is fairly reliable. The problem is that people pick weak passwords. Unfortunately, because UNIX is a multi-user system, every user with a weak password represents a risk to the remaining users. This is a problem that must be addressed:

It is of utmost importance that all users on a system choose a password that is not easy to guess. The security of each individual user is important to the security of the whole system. Users often have no idea how a multi-user system works and don't realize that they, by choosing an easy-to-remember password, indirectly make it possible for an outsider to manipulate the entire system.6

6Walter Belgers, UNIX Password Security. December 6, 1993.


TIP: The above-mentioned paper, UNIX Password Security, gives an excellent overview of exactly how DES works into the UNIX password scheme. This includes a schematic that shows the actual process of encryption using DES. For users new to security, this is an excellent starting point.

Cross Reference: Locate UNIX Password Security by entering the search string password.ps.
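To give a feel for the scheme that paper describes: classic UNIX runs each password through a one-way function (a DES-based crypt(3)) together with a small random salt, and stores only the result. The sketch below uses SHA-256 purely for illustration, not the real DES-based algorithm; the point it demonstrates is that the salt makes identical passwords hash differently:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted one-way hash, in the spirit of crypt(3).
    Classic crypt used a 12-bit, two-character salt; SHA-256 here is
    purely illustrative, not the real UNIX algorithm."""
    if salt is None:
        salt = os.urandom(2)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, stored):
    """Re-hash the attempt with the stored salt and compare."""
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored

# Same password, two salts -> two different stored hashes, so an attacker
# cannot precompute one dictionary of hashes that covers every account.
print(hash_password("hunter2", b"aa")[1] == hash_password("hunter2", b"ab")[1])  # False
```

Note that the one-way property means the system never decrypts anything at login time; it simply repeats the hash and compares. That is also why crackers must guess-and-hash rather than "decrypt" /etc/passwd.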

What are weak passwords? Characteristically, they are anything that might occur in a dictionary. Moreover, proper names are poor choices for passwords. However, there is no need to theorize about which passwords are easily cracked: if a password appears in a password-cracking wordlist available on the Internet, the password is no good. So, instead of wondering, get yourself a few lists.

Cross Reference: Start your search for wordlists at http://sdg.ncsa.uiuc.edu/~mag/Misc/Wordlists.html.
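Here is what those wordlists are actually used for. The sketch below is illustrative (SHA-256 stands in for the real crypt(3) function): hash every wordlist entry with the account's salt and compare against the stored hash; a match reveals the password.

```python
import hashlib

def hash_pw(salt, password):
    # Illustrative salted hash; real /etc/passwd entries use crypt(3).
    return hashlib.sha256((salt + password).encode()).hexdigest()

def crack_with_wordlist(salt, stored_hash, wordlist):
    """Dictionary attack: hash each candidate with the account's salt
    and compare it to the stored hash."""
    for word in wordlist:
        if hash_pw(salt, word) == stored_hash:
            return word
    return None

# "dragon" is a classic weak choice found in nearly every wordlist.
stored = hash_pw("xy", "dragon")
print(crack_with_wordlist("xy", stored, ["letmein", "qwerty", "dragon"]))  # dragon
```

This is exactly why a password's only real defense is absence from every list: the attack requires no flaw in the hash function at all, just a guessable choice.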

By regularly checking the strength of the passwords on your network, you can ensure that crackers cannot penetrate it (at least not through exploiting bad password choices). Such a regimen can greatly improve your system security. In fact, many ISPs and other sites are now employing tools that check a user's password when it is first created. This basically implements the philosophy that

...the best solution to the problem of having easily guessed passwords on a system is to prevent them from getting on the system in the first place. If a program such as a password cracker reacts by guessing detectable passwords already in place, then although the security hole is found, the hole existed for as long as the program took to detect it...If however, the program which changes users' passwords...checks for the safety and guessability before that password is associated with the user's account, then the security hole is never put in place.7

7Matthew Bishop, UC Davis, California, and Daniel Klein, LoneWolf Systems Inc. "Improving System Security via Proactive Password Checking." (Appeared in Computers and Security [14, pp. 233-249], 1995.)

TIP: This paper is probably one of the best case studies and treatments of easily-guessable passwords. It treats the subject in depth, illustrating real-life examples of various passwords that one would think are secure but actually are not.

Cross Reference: Locate Improving System Security via Proactive Password Checking by entering the search string bk95.ps.
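The proactive approach Bishop and Klein describe can be sketched as a gatekeeper run at password-change time, before the choice is ever stored. The specific rules below (a minimum length, dictionary words forward and reversed, fragments of the account name) are my own illustrative choices, not the paper's exact checks:

```python
def acceptable(password, wordlist, username=""):
    """Proactive check: reject a weak password before it reaches the system."""
    lowered = password.lower()
    if len(password) < 8:
        return False                                  # too short to resist brute force
    if lowered in wordlist or lowered[::-1] in wordlist:
        return False                                  # dictionary word, forward or reversed
    if username and username.lower() in lowered:
        return False                                  # derived from the account name
    return True

words = {"password", "dragon", "letmein"}
print(acceptable("password", words))            # False
print(acceptable("kV9#mq2Lx", words, "fred"))   # True
```

A real checker (npasswd and passwd+ were the well-known ones of this era) applies many more transformations, but the principle is the same: the security hole is never put in place.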

NOTE: As you go along, you will see many of these files have a *.ps extension. This signifies a PostScript file. PostScript is a language and method of preparing documents. It was created by Adobe, the makers of Acrobat and Photoshop.
To read a PostScript file, you need a viewer. One very good one is Ghostscript, which is shareware and can be found at http://www.cs.wisc.edu/~ghost/.

Another good package (and a little more lightweight) is a utility called Rops. Rops is available for Windows and is located here:

Other papers of importance include the following:

"Observing Reusable Password Choices"

Purdue Technical Report CSD-TR 92-049

Eugene H. Spafford

Department of Computer Sciences, Purdue University

Date: July 3, 1992

Search String: Observe.ps

"Password Security: A Case History"

Robert Morris and Ken Thompson

Bell Laboratories

Date: Unknown

Search String: pwstudy.ps

"Opus: Preventing Weak Password Choices"

Purdue Technical Report CSD-TR 92-028

Eugene H. Spafford

Department of Computer Sciences, Purdue University

Date: June 1991

Search String: opus.PS.gz

"Federal Information Processing Standards Publication 181"

Announcing the Standard for Automated Password Generator

Date: October 5, 1993

URL: http://www.alw.nih.gov/Security/FIRST/papers/password/fips181.txt

"Augmented Encrypted Key Exchange: A Password-Based Protocol Secure Against Dictionary Attacks and Password File Compromise"

Steven M. Bellovin and Michael Merritt

AT&T Bell Laboratories

Date: Unknown

Search String: aeke.ps

"A High-Speed Software Implementation of DES"

David C. Feldmeier

Computer Communication Research Group


Date: June 1989

Search String: des.ps

"Using Content Addressable Search Engines to Encrypt and Break DES"

Peter C. Wayner

Computer Science Department

Cornell University

Date: Unknown

Search String: desbreak.ps

"Encrypted Key Exchange: Password-Based Protocols Secure Against Dictionary Attacks"

Steven M. Bellovin and Michael Merritt

AT&T Bell Laboratories

Date: Unknown

Search String: neke.ps

"Computer Break-ins: A Case Study"

Leendert Van Doorn

Vrije Universiteit, The Netherlands

Date: Thursday, January 21, 1993

Search String: holland_case.ps

"Security Breaches: Five Recent Incidents at Columbia University"

Fuat Baran, Howard Kaye, and Margarita Suarez

Center for Computing Activities

Columbia University

Date: June 27, 1990

Search String: columbia_incidents.ps

Other Sources and Documents

Following is a list of other resources. Some are not available on the Internet. However, there are articles that can be obtained through various online services (perhaps Uncover) or at your local library through interlibrary loan or through microfiche. You may have to search more aggressively for some of these, perhaps using the Library of Congress (locis.loc.gov) or perhaps an even more effective tool, like WorldCat (www.oclc.org).

"Undetectable Online Password Guessing Attacks"

Yun Ding and Patrick Horster,

Operating Systems Review, 29(4), pp. 77-86

Date: October 1995

"Optimal Authentication Protocols Resistant to Password Guessing Attacks"

Li Gong

Stanford Research Institute

Computer Science Laboratory

Menlo Park, CA

Date: Unknown

Search String: optimal-pass.dvi or optimal-pass.ps

"A Password Authentication Scheme Based on Discrete Logarithms"

Tzong Chen Wu and Chin Chen Chang

International Journal of Computer Mathematics; Vol. 41, Number 1-2, pp. 31-37


"Differential Cryptanalysis of DES-like Cryptosystems"

Eli Biham and Adi Shamir

Journal of Cryptology, 4(1), pp. 3-72


"A Proposed Mode for Triple-DES Encryption"

Don Coppersmith, Don B. Johnson, and Stephen M. Matyas

IBM Journal of Research and Development, 40(2), pp. 253-262

March 1996

"An Experiment on DES Statistical Cryptanalysis"

Serge Vaudenay

Conference on Computer and Communications Security, pp. 139-147

ACM Press

March 1996

"Department of Defense Password Management Guideline"

If you want to gain a more historical perspective regarding password security, start with the Department of Defense Password Management Guideline. This document was produced by the Department of Defense Computer Security Center at Fort Meade, Maryland.

Cross Reference: You can find the Department of Defense Password Management Guideline at http://www.alw.nih.gov/Security/FIRST/papers/password/dodpwman.txt.


You have reached the end of this chapter, and I have only a few things left to say in closing. One point I want to make is this: password crackers are growing in number. Because these tools often require significant processing power, it is not unusual for crackers to break into a large, powerful site just so they can use the processing power available there. For example, if you can crack a network with, say, 800 workstations, you can use at least some of those machines to perform high-speed cracking. By distributing the workload across several of these machines, you can ensure a much quicker result.
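The divide-and-conquer idea behind distributed cracking is simple: split the candidate space into chunks and hand one chunk to each machine. Here is a toy sketch (the charset and the chunking strategy are invented for illustration; a real distributed cracker would hand each worker an index range over the network rather than materialize lists):

```python
from itertools import product

def candidates(charset, length):
    """Every string of the given length over the charset, in order."""
    for combo in product(charset, repeat=length):
        yield "".join(combo)

def partition(charset, length, workers):
    """Split the search space into near-equal contiguous chunks, one per machine."""
    space = list(candidates(charset, length))
    chunk = -(-len(space) // workers)            # ceiling division
    return [space[i:i + chunk] for i in range(0, len(space), chunk)]

# 2^3 = 8 candidates split across 3 workers -> chunks of 3, 3, and 2.
parts = partition("ab", 3, 3)
print([len(p) for p in parts])  # [3, 3, 2]
```

Each worker then runs an ordinary dictionary or brute-force loop over its own chunk; with N machines the wall-clock time drops by roughly a factor of N, which is exactly the appeal of an 800-workstation network.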

Many people argue that there is no legitimate reason persuasive enough to warrant the creation of such tools. That view is untenable. Password crackers provide a valuable service to system administrators by alerting them to weak passwords on the network. The problem is not that password crackers exist; the problem is that they aren't used frequently enough by the good guys. I hope that this book heightens awareness of that fact.


 [Return to the Introduction]

[Return to the Hacker's HomePage]