CompTIA Security+ Cert Guide: OS Hardening and Virtualization
This chapter covers the following subjects:
• Hardening Operating Systems—Service packs, patches, hotfixes—This section details what you need to know to make your operating system strong as steel. Group policies, security templates, and baselining put on the finishing touches to attain that bullet-proof system.
• Virtualization Technology—This section delves into virtual machines and other virtual implementations with an eye on applying real-world virtualization scenarios.
This chapter covers the CompTIA Security+ SY0-301 objectives 3.6, 4.1, and 4.2.
Imagine a computer with a freshly installed server operating system (OS) placed on the Internet or on a DMZ that went live without any updating, service packs, or hotfixes. How long do you think it would take for this computer to be compromised? A week? Sooner? It depends on the size and popularity of the organization, but it won’t take long for a nonhardened server to be compromised. And it’s not just servers! Workstations, routers, switches: You name it; they all need to be updated regularly, or they will fall victim to attack. By updating systems frequently and by employing other methods such as group policies and baselining, we are hardening the system, making it tough enough to withstand the pounding that it will probably take from today’s technology...and society.
Another way to create a secure environment is to run OSs virtually. Virtual systems allow for a high degree of security, portability, and ease of use. However, they are resource-intensive, so a balance needs to be found, and virtualization needs to be used according to the resources of the organization. Of course, these systems need to be maintained and updated (hardened) as well.
By utilizing virtualization properly and by implementing an intelligent update plan, OSs, and the relationships between OSs, can be more secure and last a long time.
Foundation Topics
Hardening Operating Systems
An OS that has been installed out-of-the-box is inherently insecure. This can be attributed to several things, including initial code issues and backdoors, the age of the product, and the fact that most systems start off with a basic and insecure set of rules and policies. How many times have you heard of an OS where the controlling user account was easily accessible and had no password? Although these types of oversights are constantly being improved upon, making an out-of-the-box experience more pleasant, new applications and new technologies offer new security implications as well. So regardless of the product, we must try to protect it after the installation is complete.
Hardening of the OS is the act of configuring an OS securely, updating it, creating rules and policies to help govern the system in a secure manner, and removing unnecessary applications and services. This is done to minimize OS exposure to threats and to mitigate possible risk. Although it is impossible to reduce risk to zero, I’ll show some tips and tricks that can enable you to diminish current and future risk to an acceptable level.
This section demonstrates how to harden the OS through the use of service packs, patches and patch management, hotfixes, group policies, security templates, and configuration baselines. We then discuss a little bit about how to secure the file system and hard drives. But first, let’s discuss how to go about analyzing the system and deciding which applications and services are unnecessary, and then remove them.
Removing Unnecessary Applications and Services
Unnecessary applications and services use valuable hard drive space and processing power. Plus, they can be vulnerabilities to an operating system.
For example, instant messaging programs might be fun for a user but usually are not productive in the workplace (to put it nicely); plus, they often have backdoors that are easily accessible to attackers. They should be discouraged or disallowed by rules and policies. Be proactive when it comes to these types of programs. If users can't install an IM program on their computer, you will never have to go about removing it from the system. But if you do have to remove an application like this, be sure to remove all traces that it ever existed. Make sure that related services are turned off and disabled. Then verify that their inbound ports are no longer functional, and that they are closed and secured. For example, AOL Instant Messenger uses inbound port 5190, which is well known to attackers, as are the inbound ports of other IM programs, such as ICQ or Trillian. Confirm that any shares created by an application are disabled as well. Basically, remove all instances of the application or, if necessary, re-image the computer! That is just one example of many, but it can be applied to most superfluous programs. Another type of program to watch out for is the remote control program; applications that enable remote control of a computer should be avoided if possible.
Personally, I use a lot of programs. But over time, some of them fall by the wayside and are replaced by better programs. The best procedure is to check a system periodically for any unnecessary programs. For example, in Windows 7 we can look at the list of installed programs by going to the Control Panel > Programs > Programs and Features, as shown in Figure 3-1.
Figure 3-1. Windows 7 Programs and Features Window
Notice in the figure that Camtasia Studio 5 is installed. If in the future I decide to use another program, such as Adobe Captivate or something similar, and Camtasia is no longer necessary, it should be removed. This can be done by right-clicking the application and selecting Uninstall. Or an application might have an uninstall feature built in to the Start menu that you can use. Camtasia takes up 61 MB, so it makes sense to remove apps like this to conserve hard drive space. This becomes more important when you deal with audio/video departments that would use an application (and many others like it) such as Camtasia. They are always battling for hard drive space, and it can get ugly! Not only that, but many applications place a piece of themselves in the system tray. So, a part of the program is actually running behind the scenes using processor/RAM resources. If the application is necessary, there are often ways to eliminate it from the system tray, either by right-clicking the system tray icon and accessing its properties, or by turning it off with a configuration program such as MSconfig.
Consider also that apps like this might attempt to communicate with the Internet to download updates, or for other reasons. This makes the issue not only a resource problem but also a security concern, so an unused application should be removed. Only software deemed necessary should be installed in the future.
Services are used by applications and the OS. They too can be a burden on system resources and pose security concerns. Examine Figure 3-2 and note the highlighted service.
Figure 3-2. Services Window in Windows XP
The OS shown in Figure 3-2 is Windows XP. Windows XP was the last Microsoft OS to have Telnet installed by default, even though it was already well-known that Telnet was a security risk. This is an example of an out-of-box security risk. But to make matters worse, the Telnet service in the figure is started! Instead of using Telnet, a more secure application/protocol should be utilized such as SSH. Then Telnet should be stopped and disabled. To do so, just right-click the service, select Properties, then click the Stop button, and change the Startup type drop-down menu to the Disabled option, as shown in Figure 3-3. This should be done for all unnecessary services, for example, the Trivial File Transfer Protocol (TFTP). By disabling services such as this one we can reduce the risk of attacker access to the computer and we trim the amount of resources used. This is especially important on Windows servers, because they run a lot more services and are a more common target. By disabling unnecessary services, we reduce the size of the attack surface. Services can be disabled in the Windows Command Prompt by using the sc config command, and can be started and stopped with the net start and net stop commands, respectively.
Figure 3-3. Telnet Properties Dialog Box
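As mentioned, services can also be stopped and disabled from the Windows Command Prompt. Here is a minimal sketch for the Telnet service, assuming its short service name is TlntSvr (the name used on most Windows XP and Server 2003 systems; verify the name in the service's Properties dialog box first):
net stop TlntSvr
sc config TlntSvr start= disabled
The first command stops the running service; the second changes its Startup type to Disabled so it does not start again at the next boot. Note that the space after start= is required by the sc syntax.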
Services can be stopped in the Linux command line in a few ways:
- By typing the following syntax:
/etc/init.d/<service> stop
where <service> is the service name.
- By typing the following syntax in select versions:
service <service> stop
Some services require a different set of syntax. For example, Telnet can be deactivated in Red Hat by typing chkconfig telnet off. Check the MAN pages within the command line or online for your particular version of Linux to obtain exact syntax and any previous commands that need to be issued. Or use a generic Linux online MAN page, for example: http://linux.die.net/man/1/telnet.
In Mac OS X, services can be stopped in the command line by using the following syntax:
% sudo /sbin/service <service> stop
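Before stopping anything, it helps to see what is actually running. On a Red Hat-style system, for example, you could list services and their states with commands such as the following (a rough sketch; the exact tools vary by distribution):
service --status-all
chkconfig --list
The first command shows the current status of each service, and the second shows which services are configured to start in each runlevel (xinetd-based services such as telnet are listed separately at the bottom of the output).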
Don’t confuse services with service packs. Although a service controls a specific function of an OS or application, a service pack is used to update a system. The service pack probably will update services as well, but the similarity in names is purely coincidental.
Service Packs
A service pack (SP) is a group of updates, bug fixes, updated drivers, and security fixes installed from one downloadable package or from one disc. When the number of patches for an OS reaches a certain limit, they are gathered together into an SP. This might take one to several months after the OS is released. Because organizations know an SP will follow an OS release, which implies that there will be security issues with a brand new out-of-the-box OS, they will usually wait until the first SP is released before embracing a new OS.
SPs are numbered; for example, SP1, SP2, and so on. An OS without an SP is referred to as SP0. Installing an SP is relatively easy and asks only a few basic questions. When those questions are answered, it takes several minutes or more to complete the update; then a restart is required. As the SP is installed, it rewrites many files and copies new ones to the hard drive as well.
Historically, many SPs have been cumulative, meaning that they also contain previous SPs. For example, SP2 for Windows XP includes all the updates from SP1; a Windows XP installation with no SP installed can be updated directly to SP2 without having to install SP1 first. However, you also see incremental SPs, for example, Windows XP SP3. A Windows XP installation with no SP cannot be updated directly to SP3; it needs to have SP1 or SP2 installed first before the SP3 update. Another example of an incremental SP is Windows Vista SP2; SP1 must be installed before updating to SP2 in Windows Vista. This is becoming more common with Microsoft software. Before installing an SP, read the instructions that accompany it, or the instructions on the download page on the company’s website.
To find out an OS’s current SP level, click Start, right-click Computer, and select Properties, and the SP should be listed. If there is no SP installed, it will be blank. An example of Windows 7’s System window is shown in Figure 3-4; it shows that SP1 is installed. An example of Windows XP’s System Properties dialog box is shown in Figure 3-5; it has no SP installed (SP0). If an SP were installed, the SP number would be displayed under Version 2002; otherwise the area is left blank. Windows Server OSs work in the same fashion.
Figure 3-4. Windows 7 System Window
Figure 3-5. Windows XP System Properties Dialog Box
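If you prefer the command line, or need to check many systems quickly, the SP level can also be queried from the Command Prompt. A couple of hedged examples (available on most Windows XP and newer systems):
winver
wmic os get ServicePackMajorVersion
The winver command opens a small About Windows dialog box that lists the SP, and the wmic command prints the major SP number directly to the console, which is handy for scripting.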
To find out what SP a particular version of Office is running, click Help on the menu bar and select About Microsoft Office <Application Name> where the application name could be Outlook, Word, and so on, depending on what app you use. An example of this in Outlook is shown in Figure 3-6. Office SPs affect all the applications within the Office suite.
Figure 3-6. Microsoft Outlook About Window
SPs can be acquired through Windows Update, at www.microsoft.com, on CD/DVD, and through a Microsoft Developer Network (MSDN) subscription. An SP might also have been incorporated into the original OS distribution DVD/CD. This is known as slipstreaming. This method enables the user to install the OS and the SP at the same time in a seamless manner. System administrators can create slipstreamed images for simplified over-the-network installations of the OS and SP.
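As an example of slipstreaming, Microsoft's XP-era SP packages support an /integrate switch that copies the SP files into an existing installation source. A rough sketch (the package file name shown here is only an example; use the actual SP executable you downloaded, and point it at a folder containing the installation files):
WindowsXP-SP3-x86.exe /integrate:C:\XPSource
Afterward, the contents of C:\XPSource can be burned to disc or shared on the network, and any OS installed from that source will already be at the SP3 level.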
Table 3-1 defines the latest SPs as of August 2011. You might see older OSs in the field. (If something works, why replace it, right?) For example, Windows NT and 2000 servers might be happily churning out the data necessary to users. That’s okay; just make sure that they use the latest SP so that they can interact properly with other computers on the network. Keep in mind that this table is subject to change because new SPs can be released at any time. Note that other applications such as Microsoft Office, and server-based apps such as Microsoft Exchange Server, use SPs as well.
Table 3-1. Latest Microsoft SPs as of August 2011
Operating System | Service Pack
Windows 7 | SP1
Windows Vista | SP2
Windows XP | SP3
Windows Server 2008 | SP1
Windows Server 2003 | SP2
Windows 2000 (Server and Professional) | SP4
Windows NT 4.0 (Server and Workstation) | SP6
Office 2010 | SP1
Office 2007 | SP2
Office 2003 | SP3
Office 2000 | SP3
If possible, service pack installations should be done offline. Disconnect the computer from the network by disabling the network adapter before initiating the SP upgrade. Again, because brand new OSs are inherently insecure to some extent (no matter what a manufacturer might say), organizations usually wait for the release of the first SP before implementing the OS on a live network. However, SPs are not the only type of updating you need to do to your computers. Microsoft OSs require further patching with the Windows Update program, and other applications require their own patches and hotfixes.
Windows Update, Patches, and Hotfixes
OSs should be updated regularly. For example, Microsoft recognizes the deficiencies in an OS, and possible exploits that could occur, and releases patches to increase OS performance and protect the system. After the latest SP has been installed, the next step is to see whether any additional updates are available for download.
For example, if you want to install additional updates for Windows through Windows Update, you can do the following:
Step 1. Click Start > All Programs > Windows Update.
Step 2. Different OSs have different results at this point. For example, Windows 7/Vista opens the Windows Update window in which you can click the Install Updates button. Windows XP opens a web page in which you can select Express or Custom installation of updates. Follow the prompts to install the latest version of the Windows Update software if necessary.
Step 3. The system (or web page) automatically scans for updates. Updates are divided into the following categories:
- Critical updates and SPs—These include the latest SP and other security and stability updates. Some updates must be installed individually; others can be installed as a group.
- Windows updates—Recommended updates to fix noncritical problems certain users might encounter; these also add features and update components bundled into Windows.
- Driver updates—Updated device drivers for installed hardware.
If your system is in need of updates, a shield (for the Windows Security Center) appears in the system tray. Double-clicking this brings up the Security Center window in which you can turn on automatic updates. To modify how you are alerted to updates, and how they are downloaded and installed, do the following in Windows 7/Vista:
- Click Start > All Programs > Windows Update; then click the Change Settings link. (Other OSs might require slightly different navigation to access this.)
From here, there will be four options (in other OSs, the options might be slightly different):
- Install Updates Automatically—This is the recommended option by Microsoft. You can schedule when and how often the updates should be downloaded and installed.
- Download Updates but Let Me Choose Whether to Install Them—This automatically downloads updates when they become available, but Windows prompts you to install them instead of installing them automatically. Each update has a checkbox, so you can select individual updates to install.
- Check for Updates but Let Me Choose Whether to Download and Install Them—This lets you know when updates are available, but you are in control of when they are downloaded and installed.
- Never Check for Updates—This is not recommended by Microsoft because it can be a security risk but might be necessary in some environments in which updates could cause conflicts over the network. In some networks, the administrator takes care of updates from a server and sets the local computers to this option.
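If you manage updates from the command line or through scripts, the Windows Update agent in XP/Vista/7 can also be nudged manually. A hedged example:
wuauclt /detectnow
This tells the Automatic Updates client to check for new updates immediately instead of waiting for its next scheduled detection cycle; it is often used after changing update settings or when troubleshooting a client that does not report to an update server.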
Another tool that can be used online is Microsoft Update, which is similar to Windows Update, but it can update for other Microsoft applications as well. It can be found at the following link: http://windowsupdate.microsoft.com/. For newer versions of Windows, this simply opens the Windows Update program on your local computer automatically.
Patches and Hotfixes
The best place to obtain patches and hotfixes is from the manufacturer's website. The terms patches and hotfixes are often used interchangeably. Windows Updates are made up of hotfixes. Originally, a hotfix was defined as a single problem-fixing patch to an individual OS or application, installed live while the system was up and running, without a reboot necessary. However, this term has changed over time and varies from vendor to vendor. (Vendors may even use both terms to describe the same thing.) For example, if you run the systeminfo command in the Command Prompt of a Windows Vista computer, you see a list of Hotfix(s), similar to Figure 3-7. The figure doesn't show all of them because there are 88 in total. However, they can be identified with the letters KB followed by six numbers. Some of these are single patches to individual applications, but others affect the entire system, such as #88, which is called KB948465. This hotfix is actually Windows Vista Service Pack 2, which includes program compatibility changes, additional hardware support, and general OS updates. And a Service Pack 2 installation definitely requires a restart.
Figure 3-7. systeminfo Command in Windows Vista
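If you only want the list of installed hotfixes (rather than the full systeminfo output), WMI can be queried directly from the Command Prompt. A quick sketch:
wmic qfe get HotFixID,InstalledOn
This prints each KB number along with its installation date, which makes it easy to compare a computer against your patch records or to redirect the output to a text file for auditing.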
At the other end of the spectrum, World of Warcraft defines hotfixes as a "hot" change to the server with no downtime (or a quick world restart), and no client download is necessary. The organization releases these if they are critical, instead of waiting for a full patch version. The gaming world commonly uses the terms patch version, point release, or maintenance release to describe a group of file updates to a particular gaming version. For example, a game might start at version 1 and later release an update known as 1.17. The .17 is the point release. (This could be any number depending on the number of code rewrites.) Later, the game might release 1.32, in which .32 is the point release, again otherwise referred to as the patch version. This is common with other programs as well. For example, the Camtasia program shown earlier in this chapter is version 5.0.2. The second dot (.2) represents very small changes to the program, whereas a patch version called 5.1 would be a larger change, and 6.0 would be a completely new version of the software. This concept also applies to blogging applications and forums (otherwise known as bulletin boards). As new threats are discovered (and they are extremely common in the blogging world), new patch versions are released. They should be downloaded by the administrator, tested, and installed without delay. Admins should keep in touch with their software manufacturers, either by phone or e-mail, or by frequenting their web pages. This keeps the admin "in the know" when it comes to the latest updates. And this applies to server and client operating systems, server add-ons such as Microsoft Exchange or SQL Server, Office programs, web browsers, and the plethora of third-party programs that an organization might use. Your job just got a bit busier!
Of course, we are usually not concerned with updating games in the working world; they should be removed from a computer if they are found (unless perhaps if you work for a gaming company). But multimedia software such as Camtasia is prevalent in most companies, and web-based software such as bulletin-board systems are also common and susceptible to attack.
Patches generally carry the connotation of a small fix in the mind of the user or system administrator, so larger patches are often referred to as software updates, service packs, or something similar. However, if you were asked to fix a single security issue on a computer, a patch would be the solution you would want.
Sometimes, patches are designed poorly, and although they might fix one problem, they could possibly create another, which is a form of software regression. Because you never know exactly what a patch to a system might do, or how it might react or interact with other systems, it is wise to incorporate patch management.
Patch Management
It is not wise to go running around the network randomly updating computers, not to say that you would do so! Patching, like any other process, should be managed properly. Patch management is the planning, testing, implementing, and auditing of patches. Now, these four steps are ones that I use; other companies might have a slightly different patch management strategy, but each of the four concepts should be included:
- Planning—Before actually doing anything, a plan should be set into motion. The first thing that needs to be decided is whether the patch is necessary and whether it is compatible with other systems. Microsoft Baseline Security Analyzer (MBSA) is one example of a program that can identify security misconfigurations on the computers in your network, letting you know whether patching is needed. If the patch is deemed necessary, the plan should consist of a way to test the patch in a “clean” network on clean systems, how and when the patch will be implemented, and how the patch will be checked after it is installed.
- Testing—Before automating the deployment of a patch among a thousand computers, it makes sense to test it on a single system or small group of systems first. These systems should be reserved for testing purposes only and should not be used by “civilians” or regular users on the network. I know, this is asking a lot, especially given the amount of resources some companies have. But the more you can push for at least a single testing system that is not a part of the main network, the less you will be to blame if a failure occurs!
- Implementing—If the test is successful, the patch should be deployed to all the necessary systems. In many cases this is done in the evening or over the weekend for larger updates. Patches can be deployed automatically using software such as Microsoft’s Systems Management Server (SMS).
- Auditing—When the implementation is complete, the systems (or at least a sample of systems) should be audited: first, to make sure the patch has taken hold properly, and second, to check for any changes or failures due to the patch. SMS and other third-party tools can be used in this endeavor.
There are also Linux-based and Mac-based programs and services developed to help manage patching and the auditing of patches. Red Hat has services to help sys admins with all the RPMs they need to download and install, which can become a mountain of work quickly! And for those people who run GPL Linux, there are third-party services as well. Sometimes, patch management is just too much for one person, or for an entire IT department, and an organization might opt to contract that work out.
Group Policies, Security Templates, and Configuration Baselines
Although they are important, removing applications, disabling services, patching, hotfixing, and installing service packs are not the only ways to harden an operating system. Administrative privileges should be used sparingly, and policies should be in place to enforce your organization’s rules. Group policies are used in Microsoft environments to govern user and computer accounts through a set of rules. Built-in or administrator-designed security templates can be applied to these to configure many rules at one time. And configuration baselines should be created and used to measure server and network activity.
To access the group policy in Windows, go to the Run prompt and type gpedit.msc. This should display the Local Group Policy Editor console window. Figure 3-8 shows an example of this in Windows 7.
Although there are many configuration changes you can make, this figure focuses on the computer's security settings that can be accessed by navigating to Local Computer Policy > Computer Configuration > Windows Settings > Security Settings. From here you can make changes to the password policies (for example, how long a password lasts before it must be changed), the account lockout policies, the public key policies, and so on. We talk about these different types of policies and the best way to apply them in future chapters. The group policy editor in the figure is known as the Local Group Policy; it governs only that particular machine and the local users of that machine. It is a basic version of the group policy used by Windows Server 2008/2003 domain controllers that have Active Directory loaded.
Figure 3-8. Local Group Policy Editor in Windows 7
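Some of the password policy settings just mentioned can also be viewed and set from the Command Prompt with the net accounts command. A minimal sketch for a local (non-domain) machine; the values are only examples, so match them to your organization's policy:
net accounts
net accounts /minpwlen:8 /maxpwage:90
The first command displays the current password and lockout settings; the second sets a minimum password length of 8 characters and a maximum password age of 90 days.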
It is also from here that you can add security templates. Security templates are groups of policies that can be loaded in one procedure; they are commonly used in corporate environments. Different security templates have different security levels. These can be installed by right-clicking Security Settings and selecting Import Policy. This brings up the Import Policy From window. Figure 3-9 shows an example of this in Windows Server 2003. For example, the file securedc.inf is an information file filled with policy configurations that are more secure than the defaults you would find on a Windows Server 2003 domain controller that runs Active Directory. And hisecdc.inf is even more secure, perhaps too secure and limiting for some organizations. Generally, these policy templates are applied to organizational units on a domain controller, but they can be used for other types of systems and policies as well. Server 2003 templates are generally stored in %systemroot%\Security\templates.
There are only three default security templates in Server 2008: Defltbase.inf (uncommon), defltsv.inf (used on regular servers), and defltdc.inf (used in domain controllers). By default, these templates are stored in %systemroot%\inf. They are imported in the same manner as in Server 2003.
Figure 3-9. Import Policy From Window in Windows Server 2003
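Security templates can also be applied from the Command Prompt with the secedit utility, which is useful for scripting or for configuring many systems the same way. A hedged sketch for Windows Server 2003 (the database file name is arbitrary, and you should test any template on a nonproduction system first):
secedit /configure /db secedit.sdb /cfg %systemroot%\Security\templates\securedc.inf /overwrite
gpupdate /force
The first command applies the settings in securedc.inf to the local security database; the second refreshes group policy so the changes take effect without waiting for the normal refresh interval.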
Baselining is the process of measuring changes in networking, hardware, software, and so on. Creating a baseline consists of selecting something to measure and measuring it consistently for a period of time. For example, I might want to know what the average hourly data transfer is to and from a server. There are many ways to measure this, but I could possibly use a protocol analyzer to find out how many packets cross through the server's network adapter. This could be run for 1 hour (during business hours, of course) every day for 2 weeks. Selecting different hours for each day would add more randomness to the final results. By averaging the results together, we get a baseline. Then we can compare future measurements of the server to the baseline. This can help us to define what the standard load of our server is and the requirements our server needs on a consistent basis. It can also help when installing additional computers of the same type on the network. The term baselining is most often used to refer to monitoring network performance, but it actually can be used to describe just about any type of performance monitoring. Baselining and benchmarking are extremely important when testing equipment and when monitoring already installed devices. We discuss this further in Chapter 11, “Monitoring and Auditing.”
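As a simple example of gathering baseline data on a Windows server, the built-in typeperf command can sample a performance counter on a schedule and write the results to a file. A sketch, assuming we want one sample per minute for an hour of network throughput:
typeperf "\Network Interface(*)\Bytes Total/sec" -si 60 -sc 60 -f CSV -o baseline.csv
Run this at different hours over a couple of weeks, and then average the collected values from the CSV files to establish the baseline described above.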
Hardening File Systems and Hard Drives
Last topic about hardening your system, I promise! Not! The rest of the book constantly refers to more advanced and in-depth ways to harden a computer system. But for this chapter, let’s conclude this section by giving a few tips on hardening a hard drive and the file system it houses.
First, the file system used dictates a certain level of security. On Microsoft computers, the best option is to use NTFS, which is more secure, enables logging (oh so important), supports encryption, and has support for a much larger maximum partition size and larger file sizes. Just about the only place where FAT32 and NTFS are on a level playing field is that they support the same number of file formats. So, by far, NTFS is the best option. If a volume uses FAT or FAT32, it can be converted to NTFS using the following command:
Convert volume /FS:NTFS
For example, if I want to convert a USB flash drive named M: to NTFS the syntax would be
Convert M: /FS:NTFS
There are additional options for the convert command. To see these, simply type convert /? in the Command Prompt. NTFS enables file-level security and tracks permissions within access control lists (ACLs), which are a necessity in today's environment. Most systems today already use NTFS, but you never know about flash-based and other removable media. A quick chkdsk command in the Command Prompt or right-clicking the drive in the GUI and selecting Properties can tell you what type of file system it runs.
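Another quick way to identify the file system without scanning the whole disk is the fsutil command (it requires administrative privileges). For example:
fsutil fsinfo volumeinfo M:
Among other details, the output includes a File System Name line (such as NTFS or FAT32), which tells you immediately whether a conversion is worthwhile.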
System files and folders by default are hidden from view to protect a Windows system, but you never know. To permanently configure the system to not show hidden files and folders, navigate to Windows Explorer, click the Tools menu, and click Folder Options. Then select the View tab, and under Hidden Files and Folders select the Do not show hidden files and folders radio button. Note that in Windows 7/Vista, the menu bar can also be hidden; to view it press Alt+T on the keyboard. To configure the system to hide protected system files, select the Hide protected operating system files checkbox, located three lines below the radio button previously mentioned. This disables the ability to view files such as ntldr, boot.ini, or bootmgr. You might also need to secure a system by turning off file sharing. For example, this can be done in Windows 7/Vista within the Network and Sharing Center, and within Windows XP in the Local Area Connection Properties dialog box.
In the past, I have made a bold statement: “Hard disks will fail.” But it’s all too true. It’s not a matter of if; it’s a matter of when. By maintaining and hardening the hard disk with various hard disk utilities, we attempt to stave off that dark day as long as possible. You can implement several things when maintaining and hardening a hard disk:
- Remove temporary files—Temporary files and older files can clog up a hard disk, cause a decrease in performance, and pose a security threat. It is recommended that Disk Cleanup or a similar program be used. Policies can be configured (or written) to run Disk Cleanup every day or at logoff for all the computers on the network.
- Periodically check system files—Every once in a while it’s a good idea to verify the integrity of operating system files. This can be done in the following ways:
- With the chkdsk command in Windows. This checks the disk and, when run with the /F option, fixes basic issues such as lost files and other errors.
- With the SFC (System File Checker) command in Windows. This utility checks and if necessary replaces protected system files. It can be used to fix problems in the OS, and in other applications such as Internet Explorer. A typical command is SFC /scannow. Use this if chkdsk is not successful at making repairs.
- With the fsck command in Linux. This command is used to check and repair a Linux file system. The synopsis of the syntax is fsck [ -sAVRTNP ] [ -C [ fd ] ] [ -t fstype ] [filesys ... ] [--] [ fs-specific-options ]. More information about this command can be found at the corresponding MAN page: http://linux.die.net/man/8/fsck. A derivative, e2fsck, is used to check a Linux ext2fs (second extended file system). Open source data integrity tools, such as Tripwire, can also be downloaded for Linux.
- Defragment drives—Applications and files on hard drives become fragmented over time. For a server, this could be a disaster, because the server cannot serve requests in a timely fashion if the drive is too thoroughly fragmented. Defragmenting the drive can be done with Microsoft’s Disk Defragmenter, with the command-line defrag command, or with other third-party programs.
- Back up data—Backing up data is critical for a company. It is not enough to rely on a fault tolerant array. Individual files or the entire system can be backed up to another set of hard disks, to optical discs, or to tape. Microsoft domain controllers’ Active Directory databases are particularly susceptible to attack; the System State for these OSs should be backed up, in the case that the server fails and the Active Directory needs to be recovered in the future.
- Create restore points—Restore points should also be created on a regular basis for servers and workstations. System Restore can fix issues caused by defective hardware or software by reverting back to an earlier time. Registry changes made by hardware or software are reversed in an attempt to force the computer to work the way it did previously. Restore points can be created manually and are also created automatically by the OS before new applications or hardware is installed.
- Consider whole disk encryption—Finally, whole disk encryption can be used to secure the contents of the drive, making it harder for attackers to obtain and interpret its contents.
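To tie several of the items in this list together, here is a minimal command-line maintenance sketch for a Windows 7 system drive (run from an elevated Command Prompt; the drive letters are only examples, and each command should be tested before being rolled into any script or policy):
chkdsk C: /F
sfc /scannow
defrag C:
manage-bde -status C:
The first two commands check the disk and the protected system files, the third defragments the volume, and the last reports whether BitLocker whole disk encryption is enabled on the drive (on Windows editions that include BitLocker).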
A recommendation I give to all my students and readers is to separate the OS from the data physically. If you can have each on a separate hard drive, it can make things a bit easier just in case the OS is infected with malware. The hard drive that the OS inhabits can be completely wiped and reinstalled without worrying about data loss, and applications can always be reloaded. Of course, settings should be backed up (or stored on the second drive). If a second drive isn’t available, consider configuring the one hard drive as two partitions, one for the OS (or system) and one for the data. By doing this, and keeping a well-maintained computer, you are effectively hardening the OS.
Virtualization Technology
Let’s define virtualization. Virtualization is the creation of a virtual entity, as opposed to a true or actual entity. The most common type of entity created through virtualization is the virtual machine—usually one that runs an OS. In this section we discuss the types of virtualization and their purposes, and define some of the various virtual applications.
Types of Virtualization and Their Purposes
Many types of virtualization exist, from network and storage to hardware and software. The CompTIA Security+ exam focuses mostly on virtual machine software. The virtual machines (VMs) created by this software run operating systems or individual applications. These virtual OSs (also known as hosted OSs or guests) are designed to run inside a real OS. So the beauty of this is that you can run multiple OSs simultaneously on just one PC. This has great advantages for programmers, developers, and systems administrators, and can facilitate a great testing environment. Security researchers in particular utilize virtual machines so they can execute and test malware without risk to an actual OS and the hardware it resides on. Nowadays, many VMs are also used in live production environments. Plus, an entire OS can be dropped onto a DVD or even a flash drive and transported where you want to go.
Of course, there are drawbacks. Processor and RAM resources and hard drive space are eaten up by virtual machines. And hardware compatibility can pose some problems as well. Also, if the physical computer that houses the virtual OS fails, the virtual OS will go offline immediately. All other virtual computers that run on that physical system will also go offline. There is added administration as well. Some technicians forget that virtual machines need to be updated with the latest service packs and patches just like regular OSs. Many organizations have policies that define standardized virtual images, especially for servers. As I alluded to earlier, the main benefit of having a standardized server image is that mandated security configurations will have been made to the OS from the beginning—creating a template so to speak. This includes a defined set of security updates, service packs, patches, and so on, as dictated by organizational policy. So when you load up a new instance of the image, a lot of the configuration work will already have been done, and just the latest updates to the OS and AV software need to be applied. This image can be used in a virtual environment, or copied to a physical hard drive as well. For example, you might have a server farm that includes two physical Windows Server 2008 systems, and four virtual Windows Server 2008 systems, each running different tasks. It stands to reason that you will be working with new images from time to time as you need to replace servers or add them. By creating a standardized image once, and using it many times afterward, you can save yourself a lot of configuration time in the long run.
Virtual machines can be broken down into two categories:
- System virtual machine—A complete platform meant to take the place of an entire computer that enables you to run an entire OS virtually.
- Process virtual machine—Designed to run a single application, such as a virtual web browser.
Whichever VM you select, the VM cannot cross the software boundaries set in place. For example, a virus might infect a computer when executed and spread to other files in the OS. However, a virus executed in a VM will spread through the VM but not affect the underlying actual OS. So this provides a secure platform to run tests, analyze malware, and so on...and creates an isolated system. If there are adverse effects to the VM, those effects (and the VM) can be compartmentalized to stop the spread of those effects. This is all because the virtual machine inhabits a separate area of the hard drive from the actual OS. This enables us to isolate network services and roles that a virtual server might play on the network.
Virtual machines are, for all intents and purposes, emulators. The terms emulation, simulation, and virtualization are often used interchangeably.
Emulators can also be web-based. An example of a web-based emulator is D-Link’s DIR-655 router emulator (we use this in Chapters 5–7), which you can find at the following link:
http://support.dlink.com/emulators/dir655/133NA/login.html
You might remember older emulators such as Basilisk or DOSBox, but nowadays, anything that runs an OS virtually is generally referred to as a virtual machine or virtual appliance.
A virtual appliance is a virtual machine image designed to run on virtualization platforms; it can refer to an entire OS image or an individual application image. Generally, companies such as VMware refer to the images as virtual appliances, and companies such as Microsoft refer to images as virtual machines. One example of a virtual appliance that runs a single app is a virtual browser. VMware developed a virtual browser appliance that protects the underlying OS from malware installations from malicious websites. If the website succeeds in its attempt to install the malware to the virtual browser, the browser can be deleted and either a new one can be created or an older saved version of the virtual browser can be brought online!
Other examples of virtualization include the virtual private network (VPN), which is covered in Chapter 8, “Physical Security and Authentication Models,” and the virtual local area network (VLAN), which is covered in Chapter 5, “Network Design Elements and Network Threats.”
Working with Virtual Machines
Several companies offer virtualization software, including Microsoft and VMware. Let’s take a look at some of those programs now.
Microsoft Virtual PC
Microsoft’s Virtual PC is commonly used to host workstation OSs, server OSs, and sometimes other OSs such as DOS or even Linux. There are 32-bit and 64-bit versions that can be downloaded for free and run on most Windows systems. A common version is Virtual PC 2007, which can be downloaded from Microsoft’s website.
After a quick installation, running the program displays the Virtual PC Console window, as shown in Figure 3-10.
Figure 3-10. Virtual PC Console
After a fresh install of Virtual PC, there won’t be any virtual machines listed. However, in Figure 3-10, you can note a Windows Server 2003 VM, a SuSE Linux 9 VM, and a Windows Vista VM. Virtual software such as this allows a person to run less used or older operating systems without the need for additional physical hardware. Personally, I run all kinds of platforms with Virtual PC, but it is not the only virtual software I use.
A virtual machine can be created by clicking the New button and following the directions. The virtual machine consists of two parts when you are done:
- Virtual machine configuration file or .vmc
- Virtual hard drive file or .vhd
In addition to this, you can save the state of the virtual machine. Let’s say you need to restart your main computer but don’t want to restart the virtual machine. You could simply “save the state” of the VM; this closes the VM while remembering all the files that were open and where you were last working. Even after rebooting the actual PC, you can immediately reload the last place you were working in a VM. When a VM’s state is saved, an additional file called a .vsv file is stored adjacent to the .vhd. Figure 3-11 shows an example of a Windows Server 2003 virtual machine, which is called “Server2003,” as shown in the title bar at the top of the Virtual PC software window.
See Lab 3-2 in the “Hands-On Labs” section near the end of this chapter for a quick tutorial/lab on using Virtual PC to create a virtual machine.
Figure 3-11. Windows Server 2003 Virtual Machine
Microsoft Windows XP Mode
Windows 7 can emulate the entire Windows XP OS if you so desire. To do so, you must install Windows XP Mode, then Virtual PC, and then the Windows XP Mode update. This is done to help with program compatibility. These components can be downloaded for free (as long as you have a valid copy of Windows 7) from the following link: www.microsoft.com/windows/virtual-pc/download.aspx.
Microsoft Virtual Server
Virtual Server is similar to Virtual PC but far more powerful and meant for running server OSs in particular. It is not free like Virtual PC, and an install of Internet Information Services (IIS) is required prior to the install of Virtual Server to take full advantage of the program. When servers are created, they can be connected to by using the Virtual Machine Remote Control (VMRC) client, as shown in Figure 3-12.
Figure 3-12. Virtual Machine Remote Client in Virtual Server 2005
VMware
VMware (part of EMC Corporation) runs on Windows, Linux, and Mac OSs. Some versions of VMware (for example VMware ESX Server) can run on server hardware without any underlying OS. These programs are extremely powerful, may require a lot of resources, and are generally web-based, meaning that you would control the virtual appliance through a browser. An example of the VCenter server main management console window in VMware is shown in Figure 3-13.
Figure 3-13. VCenter Management Console Window
Hypervisor
Most virtual machine software is designed specifically to host more than one VM. A byproduct of this design is that all the VMs are able to communicate with each other quickly and efficiently. This concept is summed up by the term hypervisor. A hypervisor allows multiple virtual operating systems (guests) to run at the same time on a single computer. It is also known as a virtual machine manager (VMM). The term hypervisor is often used ambiguously. This is due to confusion concerning the two different types of hypervisors:
- Type 1: Native—The hypervisor runs directly on the host computer’s hardware. Because of this it is also known as “bare metal.” Examples of this include VMware ESX Server, Citrix XenServer, and Microsoft Hyper-V. Hyper-V can be installed as a standalone product known as Microsoft Hyper-V Server 2008, or it can be installed as a role within a standard installation of Windows Server 2008 (R2). Either way, the hypervisor runs independently and accesses hardware directly, making both versions of Hyper-V Type 1 hypervisors.
- Type 2: Hosted—This means that the hypervisor runs within (or “on top of”) the operating system. Guest operating systems run within the hypervisor. Compared to Type 1, guests are one level removed from the hardware and therefore run less efficiently. Examples of this include Microsoft Virtual PC, VMware Server, and VMware Workstation.
Generally, Type 1 is a much faster and more efficient solution than Type 2. Because of this, Type 1 hypervisors are the kind used by web-hosting companies and by companies that offer cloud computing solutions such as infrastructure as a service (IaaS). It makes sense too. If you have ever run a powerful operating system such as Windows Server 2008 within a Type 2 hypervisor such as Virtual PC 2007, you will have noticed that a ton of resources are being used that are taken from the hosting operating system. It is not nearly as efficient as running the hosted OS within a Type 1 environment. However, keep in mind that the hardware/software requirements for a Type 1 hypervisor are more stringent and more costly. Because of this, some development and testing environments use Type 2-based virtual software.
Securing Virtual Machines
In general, the security of a virtual machine operating system is equivalent to that of a physical machine OS. The VM should be updated to the latest service pack, should have the newest AV definitions, perhaps have a personal firewall, have strong passwords, and so on. However, there are several things to watch out for that, if not addressed, could cause all your work compartmentalizing OSs to go down the drain. This includes considerations for the virtual machine OS as well as the controlling virtual machine software.
First, make sure you are using current and updated virtual machine software. Update to the latest patch or service pack for the software you are using (for example, Virtual PC 2007 SP1). Configure any security settings or options in the virtual machine software. Once this is done, you can go ahead and create your virtual machines, keeping in mind the concept of standardized imaging mentioned earlier.
Next, keep an eye out for network shares and other connections between the virtual machine and the physical machine, or between two VMs. Normally, malicious software cannot travel between a VM and another VM or a physical machine as long as they are properly separated. But if there are active network shares between the two systems, malware could easily spread from one to the other. If a network share is needed, map it, use it, and then disconnect it when you are finished. If you need network shares between two VMs, document what they are and which systems (and users) connect to them. Review the shares often to see whether they are still necessary. If a virtual host is attached to a NAS device or to a SAN, it is recommended to segment the storage devices off the LAN, either physically or with a secure VLAN. Regardless of where the virtual host is located, secure it with a strong firewall and disallow unprotected file transfer protocols such as FTP and Telnet.
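For the share housekeeping just described, the net commands provide a quick way to see and remove connections. A small sketch (the server and share names are placeholders):
net share
net use Z: \\server1\vmdata
net use Z: /delete
The first command lists any shares the machine itself is offering, the second maps a share for temporary use, and the third disconnects the mapped drive when you are finished with it.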
Consider disabling any unnecessary hardware from within the virtual machine such as optical drives, USB ports, and so on. If some type of removable media is necessary, enable the device, make use of it, and then disable it immediately after finishing. Also, devices can be disabled from the virtual machine software itself. The boot priority in the virtual BIOS should also be configured so that the hard drive is booted from first, and not any removable media or network connection (unless necessary in your environment).
Because VMs use a lot of the computer's physical resources, a compromised VM can be a threat in the form of a denial-of-service attack. To mitigate this, set a limit on the amount of resources any particular VM can utilize, and periodically monitor the usage of VMs. However, be careful with the monitoring itself. Most virtual software offers the ability to monitor the various VMs from the main host, but this feature can also be exploited. Be sure to limit monitoring, enable it only for authorized users, and disable it whenever it is not necessary.
Finally, be sure to protect the raw virtual disk file. A disaster on the raw virtual disk can be tantamount to a physical disk disaster. Look into setting permissions as to who can access the folder where the VM files are stored. If your virtual machine software supports logging and/or auditing, consider implementing it so that you can see exactly who started and stopped the virtual machine, and when. Otherwise, you can audit the folder where the VM files are located. Also, consider digitally signing the VM and validating that signature prior to use.
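For example, on a Windows host you could lock down the folder holding the VM files with icacls (available in Vista/Server 2008 and newer). A hedged sketch; the path and group are placeholders for your own environment:
icacls "D:\VMs" /inheritance:r /grant:r "Administrators:(OI)(CI)F"
This removes inherited permissions from the folder and grants full control to the local Administrators group, so ordinary users cannot copy or tamper with the .vhd and configuration files. Folder auditing can then be enabled on the same folder through its Properties > Security > Advanced settings.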
One last comment: A VM should be as secure as possible, but in general, because the hosting computer is in a controlling position, it is likely more easily exploited, and a compromise to the hosting computer probably means a compromise to any guest OSs. Therefore, if possible, the host should be even more secure than the VMs it controls. So harden your heart, harden the VM, and make the hosting OS solid as a rock.