Secure that Client!

In the last issue, I covered the basics of Internet connectivity and security for servers, touching on the use of firewalls to protect machines from attack over the Internet. In this article, I’m going to look at security of client machines, the most common form of deployed computing.

In many ways, securing a client computer is tougher than securing a server. With a server, only experienced individuals should have access to the network and software configuration of the machine; on a client, the operator will likely be an end-user with little or no computer security training. This is the reason many recent computer viruses and worms have targeted the client for their infection vectors.

Server exploits can be thought of as a “push” methodology — an attacker (a human or a worm/virus) initiates the process, attaching to an exposed service’s TCP/IP port. Client attacks, on the other hand, are “pull”; the attacker must wait for the client to initiate a connection to a hostile server (in the case of a web browser exploit) or to fetch exploit data (for e-mail or application macro attacks). Keep in mind that a client machine may also be configured as a server, adding to the potential exposures.

“I send you this file in order to have your advise.” The e-mail, appearing to come from someone the recipient likely knew, had an attached file which was randomly selected from the hard-drive of a victim of the SirCam worm. The file itself was modified to carry the SirCam worm as a payload, in addition to the original (potentially private or sensitive) contents.

Clicking on this attachment made the recipient a SirCam victim as well. The worm searched for e-mail addresses on all attached hard-drives, sending itself to every address it found (along with another randomly selected file). It also did other nasty things, like filling the hard-drive with junk and deleting files.

SirCam, LoveLetter, Code Red, Melissa, Valentine, et al. all rely on a human recipient to run them. They get around the old adage never to launch an attachment from someone you don’t know by appearing to be from someone you do know. Most of them also rely on Outlook (or LookOut, as it’s known in security circles) “helping” the user by hiding the .vbs or .exe extension, which might tip off the recipient that they’re dealing not with just a document or image, but with an executable program.
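The extension-hiding trick is easy to check for mechanically. As an illustrative sketch (the function name and extension list here are my own, not taken from any particular product), a mail gateway could flag attachments whose visible name suggests a document but whose real, final extension is executable:

```python
import os

# Extensions that indicate an executable payload (an illustrative subset).
EXECUTABLE_EXTENSIONS = {".vbs", ".exe", ".scr", ".pif"}

def is_disguised_executable(filename: str) -> bool:
    """Flag names like 'report.doc.vbs' whose final extension is
    executable but whose visible name suggests a document or image."""
    root, ext = os.path.splitext(filename.lower())
    return ext in EXECUTABLE_EXTENSIONS and "." in root

print(is_disguised_executable("AnnualReport.doc.vbs"))  # True
print(is_disguised_executable("photo.jpg"))             # False
```

A check this simple would have caught the double-extension names used by SirCam and its kin, though of course it does nothing against attacks that need no attachment at all.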

Nimda, the most recent worm to attack Windows machines, went several steps further. Leveraging a bug in Internet Explorer (IE) 5.5 SP1 and earlier, Nimda could infect a client machine simply by having the user view the e-mail in a client (for example, Outlook) which used IE to render HTML e-mail — no launching of the attachment was needed.

Nimda could also infect other clients by way of file shares, as well as servers running un-patched MS Internet Information Server (IIS). Lastly, infected IIS machines could compromise un-patched IE browsers by including the worm’s payload in all HTML pages served by that server.

By using these multiple infection vectors, Nimda was particularly effective at spreading itself. Because it travelled over both e-mail and web browsing, most firewalls were totally ineffective. Virus scanners were ineffective as well, because they are generally reactive: they don’t know what to look for until a worm or virus has already spread throughout the Internet.

So what can be done? The same rule of religiously applying updates and patches to all software running on the clients is a good start. It is worth noting that the exploits used by Nimda had been known for many months, and in some cases over a year, and had been used by previous worms.

Limiting the use of file shares can also be very effective at lessening the exposure of client-to-client and client-to-server infections. It is not at all uncommon for all computers in a Local Area Network (LAN) to have write access to an organization’s file repository and/or the internal web server. While very convenient for users, it’s also equally convenient for worms.

Another security policy which would help lower the exposure of e-mail attacks is an enforced rule disallowing e-mail attachments. This can be enacted by configuring an organization’s Mail Transport Agent (MTA) to strip away any attachments included with incoming and outgoing e-mail. The users might scream, but the network would be a much safer place.
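As a rough sketch of what such an MTA filter does (this is an illustration using Python’s standard email library, not a production MTA component; the message contents are invented), the stripping step keeps the headers and plain-text body and discards everything else:

```python
from email import message_from_bytes, policy
from email.message import EmailMessage

def strip_attachments(raw: bytes) -> EmailMessage:
    """Return a copy of the message with all attachments removed."""
    msg = message_from_bytes(raw, policy=policy.default)
    stripped = EmailMessage()
    # Copy an illustrative subset of the headers.
    for header in ("From", "To", "Subject"):
        if msg[header]:
            stripped[header] = msg[header]
    # Keep only the plain-text body, if there is one.
    body = msg.get_body(preferencelist=("plain",))
    stripped.set_content(body.get_content() if body else "")
    return stripped

# Build a SirCam-style message to demonstrate on.
original = EmailMessage()
original["From"] = "friend@example.com"
original["To"] = "you@example.com"
original["Subject"] = "I send you this file"
original.set_content("Please see the attachment.")
original.add_attachment(b"malicious payload",
                        maintype="application", subtype="octet-stream",
                        filename="advice.doc.vbs")

clean = strip_attachments(original.as_bytes())
print(len(list(clean.iter_attachments())))  # 0
```

A real deployment would hook logic like this into the MTA’s filtering interface and probably quarantine, rather than silently discard, what it removes.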

The last option is to stop using software with a bad history of compromises. Nimda prompted the Gartner Group to recommend that organizations look for alternatives to IIS for their web-serving needs. I would go a step further and suggest organizations look into alternatives to Outlook, Internet Explorer and Office as well, as they all have bad histories of exploitable weaknesses.

Eudora, while not having the scheduling functions of Outlook, has a much better security history. Netscape 6.2 and Opera 6.0 have the same functionality as IE, but without the constant stream of serious bugs. StarOffice/OpenOffice similarly has most of the functionality of MS Office, including file-format compatibility, without the exposure of macro viruses.

MS products are ubiquitous in the marketplace. Rather than being a rationale for using them as well, I would argue that this should be an argument for not using them. Much like bio-diversity, there is advantage in not having the exact same infection risks as everyone else.

As mentioned in the last article, nothing but restraint on the part of the worm and virus authors keeps them from doing much more damage than simply infecting and spreading. Anyone who’s ever been infected by a worm or virus should think hard about how comfortable they feel relying on the good will of the authors. It’s a matter of when, not if, a truly destructive attack will take place.

Published in the Victoria Business Examiner.

Context for Computer Network Security

In my last article, I warned of the risks of computer networks being compromised. Ironically, just as the Business Examiner was hitting the street, a new worm, Nimda, started infecting Windows-based computers all over the Internet.

Fortunately, Nimda did not turn out to be terribly destructive. Other than consuming human resources to repair or replace infected computers, and the networking bandwidth wasted as it propagated, not much damage was done. This was because Nimda’s author was (relatively) nice, and I think was proving a point — the worm could easily have left behind a formatted, non-functional machine.

Nimda showed by example that we need to do better. Nimda used multiple exploits for which patches had been available for many months. For the Gartner Group, it was the last straw — they issued an advisory recommending that enterprises look to alternatives to Microsoft’s Internet Information Server (IIS) for web-hosting.

With the explosion of the Internet over the last few years, there are now tens of millions of computers interconnected. Unfortunately, most of these machines are not appropriately secured or managed, having been thrown online without the needed security measures in place.

While it’s impossible to communicate everything required to secure a computer network in a single article (that requires books, and much time), I hope to give a high-level overview of how the Internet works, and so provide a context for discussing network security.

The Internet uses the TCP/IP networking protocol “stack” (of layers). These are open (as in published) protocols which have effectively taken over, replacing other networking standards such as Novell’s IPX and Token Ring, even in company Local Area Networks (LANs).

Each computer on the Internet has a unique IP Number, such as 139.142.246.27. Knowing this IP number, any other computer on the ‘net can send traffic to this machine, no matter how far apart they are — the Internet will automatically route the traffic between them. A machine with a unique IP number is known as a “full peer”.

But simply sending data (in the form of Packets) between machines is just the start of the process. In order to be useful, there must be agreement between the machines on how to interpret this data. This is done by a Server offering Services to a Client on predefined Ports, using an agreed Protocol for data representation.

Two examples are Hypertext Transfer Protocol (HTTP) services offered on port 80 for Web pages, and Simple Mail Transfer Protocol (SMTP) services offered on port 25 for e-mail. HTTP and SMTP, along with hundreds of others, are known as Application protocols. They sit on top of, and leverage, the TCP/IP layers to move the data about.
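The mapping between well-known service names and port numbers is standardized and ships with most operating systems (in /etc/services on Unix-like machines). Assuming a standard services table is present, a couple of lines of Python can look it up:

```python
import socket

# Ask the operating system's services table which port a
# well-known service name corresponds to, for TCP.
for service in ("http", "smtp"):
    print(service, socket.getservbyname(service, "tcp"))
# http 80
# smtp 25
```

Clients rely on exactly this convention: a browser connects to port 80 unless told otherwise, and a mail program delivers to port 25.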

Security issues arise when the Services being offered by a machine are mis-configured or have a bug which exposes one or more exploits. This is why keeping up to date with security patches is so critical — once an exploit is found, it is only a matter of time before Crackers and Virus/Worm authors start to use them.

Firewalls are often used to protect machines in a LAN from attacks originating from the Internet. By blocking incoming traffic, even a computer with potential exploits can be protected from use by remote attackers. Firewalls can also be used to hide entire LANs behind a single IP number, using something called Network Address Translation (NAT).
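Machines hidden behind NAT normally use addresses from the private ranges set aside by RFC 1918, which are never routed on the public Internet. Python’s standard ipaddress module can make the distinction (the addresses below are illustrative):

```python
import ipaddress

# RFC 1918 private ranges (10/8, 172.16/12, 192.168/16) are what NAT
# typically hides behind a single public address; is_private
# distinguishes them from publicly routable addresses.
for addr in ("192.168.1.10", "10.0.0.5", "139.142.246.27"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

This is why a machine on a NATed LAN cannot be reached directly from the outside: there is no public route to a private address, only to the firewall’s single public one.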

However, a firewall is not a panacea; it is simply one means of protecting one side of a network from the other. It is quite common to “punch holes” in a firewall to expose services to the outside world, such as port 80 in order to serve web pages. If the services exposed have exploits (such as the IIS bugs the Nimda worm used as one vector to spread itself), the firewall won’t help at all.

Another important consideration is that firewalls cannot protect machines from each other if their traffic doesn’t flow through the firewall. Again, in the case of the Nimda worm, an IIS server behind a firewall on a LAN could become infected, and then spread to other machines on the same LAN.

To combat this problem, one or more firewalls can be configured to have all machines which expose services to the Internet be in a separate De-Militarized Zone (DMZ), and unable to connect to any machines in the safe LAN zone. Extreme paranoia (often a good idea in security matters) would dictate having a separate DMZ for each server, to prevent server-to-server infections.

Lastly, firewalls can be used to look for and report suspicious behavior. A common first step in an attack involves a “port scan”, with a machine walking through a series of port numbers looking for exploitable services. When the firewall sees this, it can lock out the originating machine from connecting to any port, and raise an alert.
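A minimal sketch of this kind of detection (the class, threshold and addresses here are invented for illustration; real firewalls also use time windows and other heuristics) is to flag a source address once it has probed more than a handful of distinct ports:

```python
from collections import defaultdict

class ScanDetector:
    """Flag a source IP as a scanner once it has probed
    `threshold` distinct ports."""

    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.ports_seen = defaultdict(set)  # source IP -> ports probed
        self.blocked = set()

    def record(self, source_ip: str, dest_port: int) -> bool:
        """Record a connection attempt; return True if the
        source should be blocked."""
        if source_ip in self.blocked:
            return True
        self.ports_seen[source_ip].add(dest_port)
        if len(self.ports_seen[source_ip]) >= self.threshold:
            self.blocked.add(source_ip)
            return True
        return False

detector = ScanDetector(threshold=5)
for port in range(20, 26):  # walk through ports, as a scanner would
    flagged = detector.record("203.0.113.9", port)
print(flagged)  # True: the source crossed the threshold and is blocked
```

An ordinary client touching one or two ports never trips the threshold, while a port-walker is locked out after a handful of probes and an alert can be raised.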

While firewalls are powerful weapons in the fight against crackers, there is a tendency to rely on them too much. Often, machines behind the firewall do not have security patches applied to their software because it is assumed they are protected. As explained above, this is a risky posture.

Another frequent mistake in firewall deployment is using a server as a firewall. The firewall’s security then depends on every other service the machine offers also being secure. If the firewall becomes compromised, all bets are off. This kind of problem is much more common than you’d think, with people installing ZoneAlarm on their ADSL or cable-modem connected machines and thinking they’re safe.

Securing a computer network is a complex endeavor, and one which is never entirely complete — there’s always the next software bug or network compromise waiting to be discovered. Security experts tend to be paranoid and nervous people, by professional requirement, and read obsessively.

While it is important not to go too far with an organization’s security posture, spending more to manage the risk than is warranted, the unfortunate truth is that the trend runs in the other direction, with most networks unreasonably exposed. Bringing in a computer security expert, temporarily or on a full-time basis, may save your organization an embarrassing future disaster.

Published in the Victoria Business Examiner.