Rebalancing HDFS Data
Over time, the data in HDFS storage can become skewed, in the sense that some DataNodes end up holding far more data blocks than the rest of the cluster’s nodes. In cases of extreme skew, read and write activity is concentrated on the heavily loaded nodes, while the sparsely populated nodes remain underutilized.
HDFS data also gets unbalanced when you add new nodes to your cluster. Hadoop doesn’t automatically move existing data around to even out the data distribution among a cluster’s DataNodes. It simply starts using the new DataNode for storing fresh data.
Hadoop doesn’t seek to achieve a fully balanced cluster. That state of affairs is quite hard to achieve in a cluster with continuous data flows. Instead, Hadoop is satisfied when the space usage on each DataNode is within a certain percentage of the space used by the other DataNodes. That allowed deviation is controlled by a threshold parameter, which gives you flexibility in how tightly the data is balanced.
Hadoop makes available a useful tool, called the balancer, to let you rebalance a cluster’s block distribution so all DataNodes store roughly equal amounts of data.
The following sections cover
Reasons for an unbalanced HDFS
Using Hadoop’s balancer tool
Setting the proper threshold value
When to run the balancer
Making the balancer run faster
Reasons for HDFS Data Imbalance
There’s no guarantee that HDFS will automatically distribute data evenly among the DataNodes in a cluster. For example, when you add a new node to the cluster, all new blocks could be allocated to that node, thus making the data distribution lopsided. When the NameNode allocates data blocks to the nodes, it considers the following criteria to determine which DataNodes get the new blocks.
Uniformly distributing data across the cluster’s DataNodes
Keeping one of the replicas of a data block on the node that’s writing the block
Placing one of the replicas on the same rack as the node writing the block, to minimize cross-rack network I/O
Spreading the block replicas across racks to support redundancy and survive the loss of an entire rack
Hadoop considers a cluster balanced when the percentage of space used on each DataNode is only a little bit above or below the average percentage of space used across the cluster’s DataNodes. How big that “little bit” can be is defined by the threshold parameter.
Running the Balancer Tool to Balance HDFS Data
The aforementioned HDFS balancer is a tool provided by Hadoop to balance the data spread across the DataNodes in a cluster by moving data blocks from the over-utilized to the under-utilized DataNodes. Figure 9.8 shows the idea behind the balancer tool. Initially, Rack 1 and Rack 2 hold all the data blocks, while the newly added Rack 3 holds none; it will remain nearly empty until new data arrives or you move existing data onto it, which is why adding nodes leads to an unbalanced cluster. When you run the balancer, Hadoop moves data blocks from their existing locations to the nodes with more free space, so that all nodes end up with roughly the same amount of used space.
Figure 9.8 How the balancer moves data blocks to the under-utilized nodes from the over-utilized nodes
You can run the balancer manually from the command line by invoking the balancer command. The start-balancer.sh script invokes the balancer. You can also run it by issuing the command hdfs balancer. Here’s the usage of the balancer command:
$ hdfs balancer --help
Usage: java Balancer
    [-policy <policy>]        the balancing policy: datanode or blockpool
    [-threshold <threshold>]  Percentage of disk capacity
    [-exclude [-f <hosts-file> | comma-separated list of hosts]]
                              Excludes the specified datanodes.
    [-include [-f <hosts-file> | comma-separated list of hosts]]
                              Includes only the specified datanodes.
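For instance, if you want to keep the balancer away from a set of DataNodes (say, nodes that are about to be decommissioned), you can pass them in through the -exclude option. Here’s a minimal sketch; the hosts file name and host names are hypothetical:
$ cat /tmp/excluded_hosts.txt
hadoop-dn17.localhost
hadoop-dn18.localhost
$ sudo -u hdfs hdfs balancer -threshold 5 -exclude -f /tmp/excluded_hosts.txt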
The threshold parameter denotes the allowed percentage deviation of each DataNode’s HDFS usage from the cluster’s average DFS utilization ratio. A node whose usage deviates from the average by more than this threshold, in either direction (higher or lower), is a candidate for rebalancing.
The default DataNode policy is to balance storage at the DataNode level. The balancer doesn’t balance data among individual volumes of the DataNode, however. The alternative blockpool policy applies only to a federated HDFS service.
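If you run a federated HDFS service, with multiple NameNodes sharing the same set of DataNodes, you can ask the balancer to keep the individual block pools balanced as well. A minimal sketch, with the default datanode policy shown for contrast:
$ sudo -u hdfs hdfs balancer -policy datanode     # default: balance storage at the DataNode level
$ sudo -u hdfs hdfs balancer -policy blockpool    # federated HDFS only: also balance each block pool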
Setting the Proper Threshold Value for the Balancer
You can run the balancer command without any parameters, as shown here:
$ sudo -u hdfs hdfs balancer
This balancer command uses the default threshold of 10 percent. This means that the balancer will balance data by moving blocks from over-utilized to under-utilized nodes, until each DataNode’s disk usage differs by no more than plus or minus 10 percent of the average disk usage in the cluster.
Sometimes, you may wish to set the threshold to a different level—for example, when free space in the cluster is getting low and you want to keep the used storage levels on the individual DataNodes within a smaller range than the default of plus or minus 10 percent. You can do so by specifying the threshold parameter, as shown here:
$ hdfs balancer -threshold 5
The amount of data moved around during rebalancing depends on the value of the threshold parameter. If you use the default value of 10 and the average DFS usage across the cluster is, for example, 70 percent, the balancer will ensure that each DataNode’s DFS usage lies somewhere between 60 and 80 percent of that DataNode’s storage capacity, once the balancing of the HDFS data is completed.
When you run the balancer, it looks at two key HDFS usage values in your cluster:
Average DFS used percentage: The average DFS used percentage in the cluster can be derived by performing the following computation:
Average DFS Used (%) = (DFS Used * 100) / Present Capacity
A Node’s used DFS percentage: This measure shows the percentage of DFS used per node.
The balancer will balance a DataNode only if the difference between a DataNode’s used DFS percentage and the average DFS used (by the cluster) is greater than the threshold value. Otherwise, it won’t rebalance the cluster.
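To make the computation concrete, here’s a small sketch using the bc calculator and the cluster-wide figures from the dfsadmin report shown later in this section; substitute your own DFS Used and Present Capacity values:
# Cluster-wide values taken from hdfs dfsadmin -report (in bytes)
dfs_used=440667433099264
present_capacity=607364914327552
# Average DFS Used % = DFS Used * 100 / Present Capacity
echo "scale=2; $dfs_used * 100 / $present_capacity" | bc    # prints 72.55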
As noted previously, if you run the balancer without specifying a threshold value, it’ll use the default value of 10 as the threshold. In our case, it won’t perform any balancing, ending up as shown here (assuming all the DataNodes have a DFS usage similar to that of Node10):
$ hdfs balancer
15/05/04 12:56:36 INFO balancer.Balancer: namenodes = [hdfs://hadoop01-ns]
15/05/04 12:56:36 INFO balancer.Balancer: parameters = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, number of nodes to be excluded = 0, number of nodes to be included = 0]
Time Stamp  Iteration#  Bytes Already Moved  Bytes Left To Move  Bytes Being Moved
The cluster is balanced. Exiting...
May 4, 2015 12:56:37 PM  Balancing took 1.47 seconds
$
The balancer ran, but it wound things up pretty quickly, because it found that all nodes in the cluster have a usage that’s within the threshold value—the cluster is already balanced!
In our case, for balancing to occur, you must specify a threshold value that’s <=2. Here’s one way to run it:
$ nohup su hdfs -c "hdfs balancer -threshold 2" > /tmp/balancer.log/stdout.log 2> /tmp/balancer.log/stderr.log &
Specifying nohup and & runs the job in the background and returns control of the shell to you. Since a balancer job can run for quite a long time in a cluster, it’s a good idea to run it this way.
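While the background job runs, you can follow its progress by tailing the log files used in the command above. Note that these paths assume a /tmp/balancer.log directory that already exists; depending on how logging is configured, the progress messages may land in either file, so tail whichever one is growing:
$ tail -f /tmp/balancer.log/stdout.log /tmp/balancer.log/stderr.log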
Using hdfs dfsadmin to Make Things Easier
In our example, we used a single node, Node10, to check that node’s DFS used percentage, and based on it we figured out that we must set the threshold to a value that’s <= 2. But you can’t run the balancer on a specific node, so how do you determine the threshold value when you have a larger number of nodes? It’s easy: just pick the lowest DFS used percentage among all the nodes in the cluster. You don’t have to spend a lot of time figuring out the DFS used percentage of each node, either. Use the hdfs dfsadmin -report command to find out everything you need in order to figure out the right threshold value.
In this example, there are 50 nodes in the cluster. I can run the dfsadmin command as follows, capturing the output in a file, since the command will print out the DFS usage reports for each node separately.
[root@hadoop01]# sudo -u hdfs hdfs dfsadmin -report > /tmp/dfsadmin.out
Look at the very top of the command’s output (in the file dfsadmin.out), where you’ll find the DFS used statistics for the entire cluster:
Configured Capacity: 608922615386112 (553.81 TB)
Present Capacity: 607364914327552 (552.40 TB)
DFS Remaining: 166697481228288 (151.61 TB)
DFS Used: 440667433099264 (400.78 TB)
DFS Used%: 72.55%
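Farther down in the same file, the report prints a DFS Used% line for each DataNode. Here’s a minimal sketch for pulling those per-node percentages out and sorting them (the first DFS Used% line in the file is the cluster-wide summary shown above, so it’s skipped); the lowest value in the sorted output is the one to base your threshold on:
$ grep 'DFS Used%' /tmp/dfsadmin.out | tail -n +2 | awk '{print $3}' | sort -n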
The smaller the value of the threshold parameter, the more work the balancer will need to perform and the more balanced the cluster will be. However, there’s a catch here: If you have a heavily used cluster with numerous writes and deletes of data, the cluster may never reach a fully balanced state, and the balancer will be merely moving around data from one node to another.
When you start the balancer, you’ll see the following type of output. Note how the balancer determines how many nodes are overutilized or underutilized. It’ll move data from the overutilized nodes to the rest of the cluster nodes. It also determines the actual amount of data that needs to be moved around to balance the cluster’s data distribution.
30/05/2016 10:02:26 INFO balancer.Balancer: 4 over-utilized: [10.192.0.55:50010:DISK, 10.192.0.24:50010:DISK, 10.192.0.54:50010:DISK, 10.192.0.25:50010:DISK]
30/05/2016 10:02:26 INFO balancer.Balancer: Need to move 8.05 TB to make the cluster balanced.
30/05/2016 09:07:21 INFO balancer.Balancer: Decided to move 10 GB bytes from 10.192.0.55:50010:DISK to 10.192.0.116:50010:DISK
30/05/2016 09:07:21 INFO balancer.Balancer: Decided to move 10 GB bytes from 10.192.0.25:50010:DISK to 10.192.0.115:50010:DISK
30/05/2016 09:07:21 INFO balancer.Balancer: Decided to move 10 GB bytes from 10.192.0.24:50010:DISK to 10.192.0.118:50010:DISK
30/05/2016 09:07:21 INFO balancer.Balancer: Decided to move 10 GB bytes from 10.192.0.54:50010:DISK to 10.192.0.110:50010:DISK
30/05/2016 09:07:21 INFO balancer.Balancer: Will move 40 GB in this iteration
30/05/2016 09:07:22 INFO balancer.Dispatcher: Successfully moved blk_1155910122_1099683676641 with size=17370340 from 10.192.0.54:50010:DISK to 10.192.0.110:50010:DISK through 10.192.0.54:50010
May 30, 2016 10:34:10 PM  Balancing took 14.56153333333334 minutes
$
When to Run the Balancer
A couple of guidelines as to when to run the balancer are appropriate. In a large cluster, run the balancer regularly. You can schedule a cron job to perform the balancing, instead of manually running it yourself. If a scheduled balancer job is still running when the next job needs to start, no harm’s done, as the second balancer job won’t start.
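As a sketch, a root crontab entry along the following lines (the schedule and log path are purely illustrative) would kick off the balancer every Sunday at 1:00 a.m.; if the previous run is still going, the new invocation simply exits:
# Run the balancer every Sunday at 1:00 a.m.
0 1 * * 0 su hdfs -c 'hdfs balancer' > /tmp/balancer_cron.log 2>&1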
It’s a good idea to run the balancer right after adding new nodes to the cluster. When you add a large number of nodes at once and run the balancer afterwards, it’ll take quite a while to complete its work.
Making the Balancer Run Faster
Ideally, you should run the balancer during periods when the cluster is lightly utilized, although the overhead is usually not high. You can adjust the balancer’s bandwidth, which determines the number of bytes per second that each DataNode in the cluster can use to rebalance its data.
The default value for the bandwidth is 10MB per second and you can raise it to make the balancer complete its work faster. You can raise the bandwidth up to about 10 percent of your network speed without any noticeable impact on the cluster’s workload. You can set the network bandwidth used by the balancer with the help of the hdfs dfsadmin command, as shown here:
$ hdfs dfsadmin -setBalancerBandwidth <bandwidth in bytes per second>
The -setBalancerBandwidth option enables you to change the network bandwidth consumed by each DataNode in your cluster during an HDFS block balancing operation. The bandwidth you specify here is the maximum number of bytes per second that will be used by each DataNode in the cluster. If you’re using a shell script to invoke the balancer periodically, you can specify the bandwidth option in the script before invoking the balancer. Here’s an example showing how to change the bandwidth to 20MB per second.
$ hdfs dfsadmin -setBalancerBandwidth 20971520
Balancer bandwidth is set to 20971520 for hadoop01.localhost/10.192.0.22:8020
Balancer bandwidth is set to 20971520 for hadoop01.localhost/10.192.0.51:8020
$
Make sure that your network has adequate capacity before increasing the balancer bandwidth. You can find out the speed of your NIC by issuing the following command:
$ ethtool eth0
...
Speed: 1000Mb/s
Duplex: Full
...
$
In this example, the NIC speed is 1,000Mb per second (1 gigabit per second, or roughly 125MB per second), so it’s safe to set the balancer bandwidth to about 10 percent of that, which works out to roughly 12MB per second.
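Since -setBalancerBandwidth expects a value in bytes per second, a quick back-of-the-envelope calculation in the shell helps. Here’s a sketch for a 1Gb/s NIC at the 10 percent guideline (the numbers are illustrative):
$ echo $(( 1000 * 1000 * 1000 / 8 ))        # 1Gb/s NIC in bytes per second: 125000000
$ echo $(( 1000 * 1000 * 1000 / 8 / 10 ))   # 10 percent of that: 12500000 (about 12MB/s)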
When the balancer runs for a long time, you can schedule it to run with different bandwidths during peak and off peak times. You can run it with a low bandwidth during peak times and run it with a higher bandwidth during periods when the cluster is less busy. For example, during peak times, you can schedule a cron job such as the following for the balancer (bandwidth of 10MB):
$ su hdfs -c 'hdfs dfsadmin -setBalancerBandwidth 10485760'
$ nohup su hdfs -c 'hdfs balancer' > /tmp/balancerstdout.log 2> /tmp/balancerstderr.log &
You can at the same time schedule a different cron job to run at off-peak times, with a higher (20MB per second) setting for the bandwidth parameter:
$ su hdfs -c 'hdfs dfsadmin -setBalancerBandwidth 20971520'
$ nohup su hdfs -c 'hdfs balancer' > /tmp/balancerstdout.log 2> /tmp/balancerstderr.log &
Only one balancer job can run at a time, so if the peak-time balancer is still running when the off-peak job starts, the second hdfs balancer invocation simply exits. The new bandwidth setting still takes effect, however: the -setBalancerBandwidth value is picked up dynamically by the DataNodes, so the running balancer continues its work at the higher bandwidth.
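Here’s a sketch of how you might wire this into a root crontab (the times and values are illustrative; the balancer itself is launched separately, for example by the cron entry shown earlier in this section):
# 8:00 a.m.: cap the balancer bandwidth at 10MB/s for the busy daytime hours
0 8 * * * su hdfs -c 'hdfs dfsadmin -setBalancerBandwidth 10485760'
# 8:00 p.m.: raise it to 20MB/s for the quieter overnight hours
0 20 * * * su hdfs -c 'hdfs dfsadmin -setBalancerBandwidth 20971520'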