- Section 1: Server Security Fundamentals
- Section 2: Intermediate Server-Hardening Techniques
- Section 3: Advanced Server-Hardening Techniques
- Summary
Section 2: Intermediate Server-Hardening Techniques
Intermediate server-hardening techniques cover SSH key authentication, AppArmor, and remote logging.
SSH Key Authentication
Most administrators access their machines over SSH, and unfortunately, sometimes hackers do too! In fact, if you have a server exposed to the public Internet and have ever bothered to check your authentication logs (/var/log/auth.log on Debian-based systems), you might have been surprised at just how many ssh attempts your machine constantly gets. What’s happening here is called an SSH brute force attack. A number of attackers have realized that often the easiest way to compromise a Linux server is to guess a user’s password. If one user (or common role account, like oracle or nagios for instance) happens to use a password that’s in an attacker’s dictionary, then it’s just a matter of time before a script guesses it.
So how do you protect against an SSH brute force attack? One way would be to audit user passwords and enforce a strict password-complexity policy. Another might be to pick a different port for SSH, hoping that obscurity will save you. Yet another involves setting up systems that parse through SSH attempts and modify your firewall rules if too many attempts come from a single IP. Not only do you risk locking yourself out with systems like that, but attackers have already moved on and will often make only a few attempts from each IP in their vast botnets. Each of these methods can help reduce the risk of a successful SSH brute force attack, but none of them eliminates it completely.
If you want to eliminate SSH brute force attacks completely, the best way is also one of the simplest: eliminate password SSH logins. If you remove password SSH logins from the equation, then attackers can guess all of the passwords they want, and even if they guess right, SSH won’t allow them to log in.
So if you remove password logins, how do you log in to SSH? The most common replacement for password-based login for SSH is to use SSH keypairs. With SSH keypairs, your client (a laptop or some other server) has both a public and private key. The private key is treated like a secret and stays on your personal machine, and the public key is copied to the ~/.ssh/authorized_keys file on the remote servers you want to log into.
Create SSH Keys
The first step is to create your SSH keypair. This is done via the ssh-keygen tool, and while it accepts a large number of options and key types, we will use settings that should work across most servers:
$ ssh-keygen -t rsa -b 4096
The -t option selects the key type (RSA) and -b selects the bit size for the key (4096 bits); a 4096-bit RSA key should be acceptable at the time of this writing. When you run the command, it will prompt you for an optional passphrase to use to unlock the key. If you don’t select a passphrase, then you can ssh into remote servers without having to type in a password. The downside is that if anyone gets access to your private key (by default in ~/.ssh/id_rsa), then he can immediately use it to ssh into your servers. I recommend setting a passphrase, and in a later section I talk about how to use ssh-agent to cache your passphrase for a period of time (much like sudo passwords are often cached for a few minutes so you don’t have to type them with every command).
Once the command completes, you will have two new files: the private key at ~/.ssh/id_rsa and the public key at ~/.ssh/id_rsa.pub. The public key is safe to share with other people and it is the file that will get copied to remote servers, but the private key should be protected like your passwords or any other secrets and not shared with anyone.
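If you ever need to confirm which key a particular file corresponds to, ssh-keygen can also print a key’s fingerprint; for example:
$ ssh-keygen -lf ~/.ssh/id_rsa.pub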
You may want to consider creating different SSH keys for different purposes. Like using different passwords for different accounts, having different SSH keys for different accounts applies the principle of compartmentalization to your SSH keys and will help protect you if one key gets compromised. If you want to store a keypair with a different file name than the default, use the -f option to specify a different file. For instance, if you use the same computer for personal and work use, you will want to create a separate keypair for each environment:
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/workkey
The preceding command creates a workkey and workkey.pub file inside the ~/.ssh/ directory.
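To avoid passing the -i option to ssh every time you use a non-default key, you can tell SSH which key to use on a per-host basis in your ~/.ssh/config file. Here is a minimal sketch that assumes a hypothetical work server named work1.example.com:
Host work1.example.com
    User kyle
    IdentityFile ~/.ssh/workkey
With an entry like that in place, ssh work1.example.com will pick up the work keypair automatically.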
Copy SSH Keys to Other Hosts
Once you have your SSH keys in place, you will need to copy the contents of your public key to the ~/.ssh/authorized_keys file on the remote server. While you could just ssh into the remote server and do this by hand, a tool called ssh-copy-id has been provided to make this easy. For instance, if I wanted to copy my public key to a server called web1.example.com and my username was kyle, I would type:
$ ssh-copy-id kyle@web1.example.com
Replace the user and server with the username and server that you would use to log in to your remote server. At this point it will still prompt you for the password you use to log in to the remote machine, but once the command completes it will be the last time! Just like with regular SSH logins, if your local username is the same as the user on the remote server, you can omit it from the command. By default ssh-copy-id will copy your id_rsa.pub file, but if your keypair has a different name, then use the -i argument to specify a different public key. So if we wanted to use the custom workkey file we created previously, we would type:
$ ssh-copy-id -i ~/.ssh/workkey.pub kyle@web1.example.com
The nice thing about the ssh-copy-id command is that it takes care of setting the proper permissions on the remote ~/.ssh directory if it doesn’t already exist (it should be owned by the user you log in as, with 700 permissions), and it will also create the authorized_keys file if it needs to. This will help you avoid a lot of the headaches that come with setting up SSH keys, which usually result from improper permissions on the local or remote ~/.ssh directory or key files.
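If you ever have to set this up by hand (for instance, on a system where ssh-copy-id isn’t available), the steps it automates look roughly like this on the remote server, where id_rsa.pub stands in for whichever public key you copied over:
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ cat id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys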
Once the ssh-copy-id command completes, you should be able to ssh into the remote server and not be prompted for the remote password. If you did set a passphrase for your SSH key, you will be prompted for that instead, but since you hopefully chose a passphrase different from the password you use on the remote server, it will be clear that key authentication is what actually logged you in.
Disable Password Authentication
Once you can ssh to the machine using keys, you are ready to disable password authentication. You will want to be careful with this step, of course, because if for some reason your keys didn’t work and you disable password authentication, you risk locking yourself out of the server. If you are transitioning a machine being used by a number of people from password authentication to keys, you will want to make sure that everyone has pushed keys to the server before you proceed any further; otherwise, someone with root privileges will have to add the remaining users’ public keys to their ~/.ssh/authorized_keys files by hand.
To disable password authentication, ssh into the remote server and get root privileges. Then edit the /etc/ssh/sshd_config file and change
PasswordAuthentication yes
to
PasswordAuthentication no
and then restart the SSH service with whichever of the following commands your system uses:
$ sudo service ssh restart
$ sudo service sshd restart
Now, you don’t want to risk locking yourself out, so keep your current SSH session active. Instead, open a new terminal and attempt to ssh into the server. If you can ssh in, then your key works and you are done. If not, run ssh with the -vvv option to get more verbose errors. To be safe, also undo the change to /etc/ssh/sshd_config and restart your SSH service to make sure you don’t get completely locked out while you perform troubleshooting.
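One more safeguard worth mentioning: before restarting the SSH service after any change to /etc/ssh/sshd_config, you can ask sshd to check the file for syntax errors (on some systems you may need to give the full path, /usr/sbin/sshd):
$ sudo sshd -t
If the configuration parses cleanly, the command prints nothing and exits successfully; otherwise it reports the offending line.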
Working with Password-Protected SSH Keys
Some administrators enjoy the convenience of SSH keys that were created without a password. You can ssh to all your servers immediately without having to type in a password, and that can be pretty convenient. If you use a source control management tool like Git over ssh, you probably also want to avoid having to type in a password every time you push or pull from remote repositories. The downside to this approach is that without a password-protected SSH key, the security of your servers is only as good as the security behind your private SSH key. If someone gets access to your ~/.ssh/id_rsa file, they can immediately access any servers you can.
With a password-protected SSH key, even if your private key gets compromised, the attacker still needs to guess the password to unlock it. At the very least, that gives you time to create and deploy a new key, and depending on the attacker, the password-protected key may never be compromised. Password-protected keys are particularly important if you happen to store any keys on systems where you share root with other administrators. Without a password, any administrator with root on the system could log in to servers with your credentials.
You don’t have to sacrifice convenience with a password-protected SSH key. There are tools in place that make it almost as convenient as any other method while giving you a great deal of security. The main tool that makes SSH key passwords more manageable is the ssh-add utility. This tool works together with ssh-agent: it allows you to type the password once, and ssh-agent caches the unlocked key in RAM. Most Linux desktop systems these days have an SSH agent running in the background (or via a wrapper like GNOME Keyring). By default, it caches the key in RAM indefinitely (until the system powers down); however, I don’t recommend that practice. Instead, I like to use ssh-add much like sudo password caching: I specify a particular time period to cache the key, after which I will be prompted for the password again.
For instance, if I wanted to cache the key for 15 minutes, much like sudo on some systems, I could type:
$ ssh-add -t 15m
Enter passphrase for /home/kyle/.ssh/id_rsa:
Identity added: /home/kyle/.ssh/id_rsa (/home/kyle/.ssh/id_rsa)
Lifetime set to 900 seconds
Note that I was able to specify the time in minutes to the -t argument by appending “m” to the end; otherwise, it assumes the number is in seconds. You will probably want to cache the key for a bit longer than that, though; for instance, to cache the key for an hour you could type:
$ ssh-add -t 1h
Enter passphrase for /home/kyle/.ssh/id_rsa:
Identity added: /home/kyle/.ssh/id_rsa (/home/kyle/.ssh/id_rsa)
Lifetime set to 3600 seconds
From now until the key expires, you can ssh into servers and use tools like Git without being prompted for the password. Once your time is up, the next time you use SSH you will be prompted for a password and can choose to run ssh-add again. If the key you want to add is not in the default location, just add the path to the key at the end of the ssh-add command:
$ ssh-add -t 1h ~/.ssh/workkey
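If you want to see which keys the agent is currently holding, ssh-add can list their fingerprints (it will tell you if no identities are currently cached):
$ ssh-add -l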
What I like to do is use this tool like a personal timer. When I start work in the morning, I calculate the number of hours or minutes from now until when I want to go to lunch and set the ssh-add timer to that number. Then I work as usual and once the next Git push or ssh command prompts me for a password, I realize it’s time to go grab lunch. When I get back from lunch I do the same thing to notify me when it’s time to leave for the day.
Of course, using a tool like this does mean that if an attacker were able to compromise your machine during one of these windows where the key is cached, she would be able to have all of the access you do without typing in a password. If you are working in an environment where that’s too much of a risk, just keep your ssh-add times short, or run ssh-add -D to delete any cached keys any time you leave your computer. You could even potentially have your lock command or screensaver call this command so it happens every time you lock your computer.
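As a rough sketch (assuming a desktop that ships the xdg-screensaver helper), you could add an alias like the following to your ~/.bashrc so that locking the screen always flushes any cached keys first:
alias lock='ssh-add -D; xdg-screensaver lock'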
AppArmor
The UNIX permissions model has long been used to lock down access to users and programs. Even though it works well, there are still areas where extra access control can come in handy. For instance, many services still run as the root user, and therefore if they are exploited, the attacker potentially can run commands throughout the rest of the system as the root user. There are a number of ways to combat this problem, including sandboxes, chroot jails, and so on, but Ubuntu has included a system called AppArmor, installed by default, that adds access control to specific system services.
AppArmor is based on the security principle of least privilege; that is, it attempts to restrict programs to the minimal set of permissions they need to function. It works through a series of rules assigned to particular programs. These rules define, for instance, which files or directories a program is allowed to read and write to, or only read from. When an application that is being managed by AppArmor violates these access controls, AppArmor steps in, blocks the action, and logs the event. A number of services include AppArmor profiles that are enforced by default, and more are being added in each Ubuntu release. In addition to the default profiles, the universe repository has an apparmor-profiles package you can install to add more profiles for other services. Once you learn the syntax for AppArmor rules, you can even add your own profiles.
Probably the simplest way to see how AppArmor works is to use an example program. The BIND DNS server is one program that is automatically managed by AppArmor under Ubuntu, so first I install the BIND package with sudo apt-get install bind9. Once the package is installed, I can use the aa-status program to see that AppArmor is already managing it:
$ sudo aa-status
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient3
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/named
/usr/sbin/tcpdump
0 profiles are in complain mode.
2 processes have profiles defined.
1 processes are in enforce mode :
/usr/sbin/named (5020)
0 processes are in complain mode.
1 processes are unconfined but have a profile defined.
/sbin/dhclient3 (607)
Here you can see that the /usr/sbin/named profile is loaded and in enforce mode, and that my currently running /usr/sbin/named process (PID 5020) is being managed by AppArmor.
AppArmor Profiles
The AppArmor profiles are stored within /etc/apparmor.d/ and are named after the binary they manage. For instance, the profile for /usr/sbin/named is located at /etc/apparmor.d/usr.sbin.named. If you look at the contents of the file, you can get an idea of how AppArmor profiles work and what sort of protection they provide:
# vim:syntax=apparmor
# Last Modified: Fri Jun 1 16:43:22 2007
#include <tunables/global>
/usr/sbin/named {
#include <abstractions/base>
#include <abstractions/nameservice>
capability net_bind_service,
capability setgid,
capability setuid,
capability sys_chroot,
# /etc/bind should be read-only for bind
# /var/lib/bind is for dynamically updated zone (and journal) files.
# /var/cache/bind is for slave/stub data, since we're not the origin
# of it.
# See /usr/share/doc/bind9/README.Debian.gz
/etc/bind/** r,
/var/lib/bind/** rw,
/var/lib/bind/ rw,
/var/cache/bind/** rw,
/var/cache/bind/ rw,
# some people like to put logs in /var/log/named/
/var/log/named/** rw,
# dnscvsutil package
/var/lib/dnscvsutil/compiled/** rw,
/proc/net/if_inet6 r,
/usr/sbin/named mr,
/var/run/bind/run/named.pid w,
# support for resolvconf
/var/run/bind/named.options r,
}
For instance, take a look at the following excerpt from that file:
/etc/bind/** r,
/var/lib/bind/** rw,
/var/lib/bind/ rw,
/var/cache/bind/** rw,
/var/cache/bind/ rw,
The syntax is pretty straightforward for these files. First there is a file or directory path, followed by the permissions that are allowed. Globs are also allowed, so, for instance, /etc/bind/** applies to all the files below the /etc/bind directory recursively. A single * would apply only to files within the current directory. In the case of that rule, you can see that /usr/sbin/named is allowed only to read files in that directory and not write there. This makes sense, since that directory contains only BIND configuration files—the named program should never need to write there. The second line in the excerpt allows named to read and write to files or directories under /var/lib/bind/. This also makes sense because BIND might (among other things) store slave zone files here, and since those files are written to every time the zone changes, named needs permission to write there.
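The same syntax lets you extend a profile yourself. For example, if you decided to keep dynamically updated zone files in a custom directory (the /srv/zones/ path here is purely illustrative and not part of the stock profile), you could add rules like the following to /etc/apparmor.d/usr.sbin.named and then reload AppArmor:
/srv/zones/ rw,
/srv/zones/** rw,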
Enforce and Complain Modes
You might have noticed that the aa-status output mentions two modes: enforce and complain. In enforce mode, AppArmor actively blocks any attempts by a program to violate its profile. In complain mode, AppArmor simply logs the attempt but allows it to happen. The aa-enforce and aa-complain programs allow you to change a profile to be in enforce or complain mode, respectively. So if my /usr/sbin/named program did need to write to a file in /etc/bind or some other directory that wasn’t allowed, I could either modify the AppArmor profile to allow it or I could set it to complain mode:
$ sudo aa-complain /usr/sbin/named
Setting /usr/sbin/named to complain mode
If later I decided that I wanted the rule to be enforced again, I would use the aa-enforce command in the same way:
$ sudo aa-enforce /usr/sbin/named
Setting /usr/sbin/named to enforce mode
If I had decided to modify the default rule set at /etc/apparmor.d/usr.sbin.named, I would need to be sure to reload AppArmor so it would see the changes. You can run AppArmor’s init script and pass it the reload option to accomplish this:
$ sudo /etc/init.d/apparmor reload
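If you changed only a single profile, you can also reload just that one with apparmor_parser instead of reloading everything:
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named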
Be careful when you modify AppArmor rules. When you first start to modify rules, you might want to set that particular rule into complain mode and then monitor /var/log/syslog for any violations. For instance, if /usr/sbin/named were in enforce mode and I had commented out the line in the /usr/sbin/named profile that granted read access to /etc/bind/**, then reloaded AppArmor and restarted BIND, not only would BIND fail to start (since it couldn’t read its config files), but I would also get a nice log entry in /var/log/syslog from the kernel reporting the denied attempt:
Jan 7 19:03:02 kickseed kernel: [ 2311.120236]
audit(1231383782.081:3): type=1503 operation="inode_permission"
requested_mask="::r" denied_mask="::r" name="/etc/bind/named.conf"
pid=5225 profile="/usr/sbin/named" namespace="default"
Ubuntu AppArmor Conventions
The following list details the common directories and files AppArmor uses, including where it stores configuration files and where it logs:
/etc/apparmor/: This directory contains the main configuration files for the AppArmor program, but note that it does not contain AppArmor rules.
/etc/apparmor.d/: You will find all the AppArmor rules under this directory along with subdirectories that contain different sets of include files to which certain rule sets refer.
/etc/init.d/apparmor: This is the AppArmor init script. By default, AppArmor is enabled.
/var/log/apparmor/: AppArmor stores its logs under this directory.
/var/log/syslog: When an AppArmor rule is violated in either enforce or complain mode, the kernel generates a log entry under the standard system log.
Remote Logging
Logs are an important troubleshooting tool on a server, but they are particularly useful after an attacker has compromised it. System logs show every local and SSH login attempt, any attempt to use sudo, kernel modules that are loaded, and extra file systems that are mounted, and if you use a software firewall with logging enabled, it might show interesting network traffic from the attacker. On web, database, or application servers, you also get extra logging from attempted accesses of those services.
The problem is that attackers know how useful and revealing logs are, too, so any reasonably intelligent attacker is going to try to modify any log on the system that might show her tracks. Also, one of the first things many rootkits and other attack scripts do is wipe the local logs and make sure their scripts don’t generate new logs.
As a security-conscious administrator, you will want all logs on a system that might be useful after an attack to also be stored on a separate system. Centralized logging is useful for overall troubleshooting, but it also makes it that much more difficult for an attacker to cover her tracks, since it would mean not only compromising the initial server but also finding a way to compromise your remote logging server. Depending on your company, you may also have regulatory requirements to ship certain critical logs (like login attempts) to a separate server for longer-term storage.
There are a number of systems available, such as Splunk and Logstash, among others, that not only collect logs from servers but can also index the logs and provide an interface an administrator can use to search through them quickly. Many of these services provide their own agent that can be installed on the system to ease collection of logs; however, just about all of them also support log collection via the standard syslog network protocol.
Instead of going through all the logging software out there, in this section I describe how to configure a client to ship logs to a remote syslog server, and in case you don’t have a central syslog server in place yet, I follow up with a few simple steps to configure a basic centralized syslog server. I’ve chosen rsyslog as my example syslog server because it supports classic syslog configuration syntax, has a number of extra features for administrators that want to fine-tune the server, and should be available for all major Linux distributions.
Client-Side Remote Syslog Configuration
It is relatively straightforward to configure a syslog client to ship logs to a remote server. Essentially, you go into your syslog configuration (for rsyslog, this is /etc/rsyslog.conf, usually along with independent configuration files under /etc/rsyslog.d) and find the configuration for the log file that you want to ship remotely. For instance, I may have a regulatory requirement that all authentication logs be shipped to a remote server. On a Debian-based system, those logs are in /var/log/auth.log, and if I look through my configuration files I should see the line that describes what type of events show up in this log:
auth,authpriv.* /var/log/auth.log
Since I want to ship these logs remotely as well, I need to add a new line almost identical to the preceding line, except I would replace the path to the local log file with the location of the remote syslog server. For instance, if I named the remote syslog server “syslog1.example.com,” I would either add a line below the preceding line or create a new configuration file under /etc/rsyslog.d with the following syntax:
auth,authpriv.* @syslog1.example.com:514
The syntax for this line is an @ sign for User Datagram Protocol (UDP), or @@ for Transmission Control Protocol (TCP), the hostname or IP address to ship the logs to, and optionally a colon and the port to use. If you don’t specify a port, it will use the default syslog port at 514. Now restart the rsyslog service to use the new configuration:
$ sudo service rsyslog restart
In the preceding example, I use UDP to ship my logs. In the past, UDP was preferred since it saved on overall network traffic when shipping logs from a large number of servers; however, with UDP you risk losing logs if the network gets congested. An attacker could even deliberately congest the network to prevent logs from reaching a remote log server. TCP does add some overhead, but the assurance that your logs won’t be silently dropped is worth it. So, I would change the previous configuration line to
auth,authpriv.* @@syslog1.example.com:514
If you find after some time that this does create too much network load, you can always revert back to UDP.
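To verify that log shipping works end to end, the logger command can generate a test message at a facility and priority that match your rule; it should then show up both in the local /var/log/auth.log and in the logs on the remote syslog server:
$ logger -p auth.notice "remote syslog test"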
Server-Side Remote Syslog Configuration
If you don’t already have some sort of central logging server in place, it’s relatively simple to create one with rsyslog. Once rsyslog is installed, you will want to make sure that remote servers are allowed to connect to port 514 UDP and TCP on this system, so make any necessary firewall adjustments. Next, add the following options to your rsyslog configuration file (either /etc/rsyslog.conf directly, or by adding an extra file under /etc/rsyslog.d):
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
This will tell rsyslog to listen on port 514 for both UDP and TCP. You will also want to restrict which IPs can communicate with your rsyslog server, so add extra lines that define which networks are allowed to send logs to it:
$AllowedSender UDP, 192.168.0.0/16, 10.0.0.0/8, 54.12.12.1
$AllowedSender TCP, 192.168.0.0/16, 10.0.0.0/8, 54.12.12.1
These lines allow access from the internal 192.168.x.x and 10.x.x.x networks as well as an external server at 54.12.12.1. Obviously, you will want to change the IPs mentioned here to reflect your network.
If you were to restart rsyslog at this point, the local system logs would grow not just with logs from the local host, but also with any logs from remote systems. This can make it difficult to parse through and find logs just for a specific host, so we also want to tell rsyslog to organize logs in directories based on hostname. This requires that we define a template for each type of log file that we want to create. In our client example, we showed how to ship auth.log logs to a remote server, so here we follow up with an example configuration that will accept those logs and store them locally with a custom directory for each host based on its hostname:
$template Rauth,"/var/log/%HOSTNAME%/auth.log"
auth,authpriv.* ?Rauth
In the first line, I define a new template I named Rauth and then specify where to store logs for that template. The second line looks much like the configuration we used on our client, only in this case at the end of the line I put a question mark and the name of my custom template. Once your configuration is in place, you can restart rsyslog with:
$ sudo service rsyslog restart
You should start to see directories being created under /var/log for each host that sends the server authentication logs. You can repeat the preceding template lines for each of the log types you want to support; just remember that you need to define the template before you can use it.
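For example, to also give each host its own copy of the general system log (mirroring the stock Debian syslog rule), you could define a second template along the same lines; the Rsyslog name is just an arbitrary label:
$template Rsyslog,"/var/log/%HOSTNAME%/syslog"
*.*;auth,authpriv.none ?Rsyslog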