- Domain 3: Network Implementation
- What You Will Need
- Lab 1: Active Directory Structure and Permissions
- Lab 2: Services and nbtstat
- Lab 3: Wiring, Part II
- Lab 4: VPN and Authentication Re-visited
- Lab 5: Firewalls, Proxies, and Ports
- Lab 6: Anti-Virus Software
- Lab 7: Fault Tolerance
- Lab 8: Disaster Recovery
- Domain 3 Practice Questions
- Answers and Explanations
Lab 7: Fault Tolerance
Orientation
In this lab you will accomplish the following:
Set up a dynamic disk.
Create strategies for redundant power, links, and clustering.
Prepare for the Network+ subdomain 3.11.
Procedure
Set up a dynamic disk.
Go to PC1.
Right-click My Computer and select Manage.
In the Computer Management window, select Disk Management. You should see a screen like Figure 3.50.
Right-click your main hard drive (it should be named Disk 0) and choose Upgrade to Dynamic Disk.
In the next window, click OK.
In the pop-up window that appears, click Upgrade.
Click Yes in the next window to continue.
Click OK to start the upgrade.
You must reboot for the upgrade to complete. Restart and log back on.
The system will have to restart again. Restart a second time and log back on.
Open the Computer Management window and click Disk Management.
Take note that the disk is not a basic disk anymore; it is a dynamic disk. Also notice that the partitions have now become volumes. This is shown in Figure 3.51.
Figure 3.50 Disk management.
Figure 3.51 A dynamic volume.
The idea behind dynamic disk technology is that you can resize volumes at will, hence the name dynamic. You are no longer stuck with static-sized partitions. In addition, you can add disks to create spanned volumes, striped volumes, and RAID sets. The drawback is that the conversion can cause errors or corruption in your operating system. Let's go over the differences between the various disk sets:
Spanned volume. One drive letter (for example, C:) spans multiple disks. When one disk fills up, data starts being stored on the next disk.
Striped volume. One drive letter again spans multiple disks, but data is written in blocks distributed alternately across the drives.
Redundant Array of Inexpensive Disks (RAID) 0. This uses a minimum of two disks; it can be more. The data is striped across the disks, which makes it essentially a striped volume.
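To make the striping idea concrete, here is a minimal, hypothetical Python sketch (not how Windows actually implements it) that distributes data blocks round-robin across a set of disks, the way a striped volume or RAID 0 array does:

```python
def stripe(data_blocks, num_disks):
    """Distribute blocks round-robin across disks (RAID 0 style).

    The "disks" here are plain Python lists standing in for drives.
    """
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(data_blocks):
        # Blocks alternate: block 0 -> disk 0, block 1 -> disk 1, ...
        disks[i % num_disks].append(block)
    return disks

striped = stripe(["b0", "b1", "b2", "b3", "b4"], 2)
print(striped)  # [['b0', 'b2', 'b4'], ['b1', 'b3']]
```

Because each disk holds only part of the data, losing any one disk loses the whole volume, which is why none of these arrangements is fault tolerant.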
So far the technologies we have mentioned are not fault tolerant. Let’s talk about two that are:
RAID 1: Mirroring. This uses exactly two disks to create a mirror. Data is written simultaneously to both disks, so there is a complete copy of all information. If you lose one disk, the other takes over automatically! Writing is slower than writing to a single conventional drive, but reading is the same speed because the system reads only from the first disk in the mirror.
RAID 5: Striping with parity. Like RAID 0, the data is striped among multiple disks. The differences are that you need a minimum of three disks and that parity information is also striped across all the disks. If one disk fails, its contents can be rebuilt from the data and parity on the remaining disks.
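The parity trick behind RAID 5 is XOR: the parity block is the XOR of the data blocks, so XORing the surviving blocks back together reconstructs a lost one. A minimal sketch (real RAID 5 rotates the parity block across disks, which this toy example omits):

```python
from functools import reduce

def xor_parity(blocks):
    """Byte-wise XOR of equal-length blocks of bytes."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1 = b"\x0f\xf0", b"\x55\xaa"   # two data blocks
p = xor_parity([d0, d1])            # parity block stored on a third disk
# If the disk holding d0 fails, XOR the survivors to rebuild it:
recovered = xor_parity([d1, p])
print(recovered == d0)  # True
```

The same property holds for any number of disks: XOR all remaining blocks (data plus parity) and the missing block falls out.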
Clustering
There are two types of clustering, both of which normally need some kind of advanced server software. The following describe these two types:
Fail-over clustering. This uses two like computers. The first handles all data transfers, but if it fails, all transmissions are sent to the second computer until you can get the first back up and running.
True clustering. This is when two or more computers (or server blades) work in unison. All processing, RAM, and HDD resources are shared equally among the machines, creating a type of supercomputer.
You want to have redundancy whenever possible: two power supplies, multiple processors, two NICs, and so on. Remember the rule: hardware will fail. It's not a matter of if, it's a matter of when.
What Did I Just Learn?
In this lab you learned how to do the following:
Set up a dynamic disk.
Create strategies for redundant power, links, and clustering.
Prepare for the Network+ subdomain 3.11.