Reclaiming HDFS Space
You can often conserve HDFS storage space by reclaiming space that is no longer needed. There are two ways to reclaim space allocated to HDFS files:
You can remove the files or directories once you’re done processing them.
You can reduce the replication factor for a file.
Removing files works well for the raw data files you load into HDFS for processing, and reducing the replication factor is a good strategy for older, less-critical HDFS files.
Removing Files and Directories
Periodic removal of unnecessary data is an operational best practice. Often, data needs to be retained only for a specific period of time. You can stretch your storage resources by removing any files that are just sitting in HDFS and eating up valuable space.
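For example, assuming a hypothetical landing directory /data/raw/2015 whose retention period has expired, you could remove it as follows:

$ hdfs dfs -rm -r -skipTrash /data/raw/2015

Note that without the -skipTrash option, the deleted files are simply moved to the user's .Trash directory, and the space isn't actually reclaimed until the trash is emptied.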
Decreasing the Replication Factor
You can configure the replication factor at the cluster level by setting the dfs.replication parameter in the hdfs-site.xml file, as explained in Chapter 4, “Planning for and Creating a Fully Distributed Cluster.” The setting you configure with the dfs.replication parameter sets a global replication factor for the entire cluster.
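For reference, here's what this setting looks like in the hdfs-site.xml file, shown with its default value of 3:

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>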
It's important to understand that while you can set the replication factor at the cluster level, you can modify the replication factor for any existing file with the -setrep command. This offers great flexibility, as you can set the replication factor based on the importance and usage of the data. For example, you can lower the replication factor for historical data and raise the replication factor for "hot" data, so more nodes can process the data locally.
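For example, assuming a hypothetical layout with an archive directory and a directory holding frequently accessed data, you could lower the replication factor for the former and raise it for the latter:

$ hdfs dfs -setrep 2 /data/archive/2015
$ hdfs dfs -setrep 4 /data/hot/current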
Note that changing the dfs.replication parameter affects only files created after the change. HDFS stores the replication factor for each file in the NameNode's metadata, so existing files keep the replication factor they were created with until you change it explicitly with -setrep.
The fs.block.size parameter (dfs.blocksize in current Hadoop releases), which sets the default block size for the cluster, behaves the same way. When you change the value of this parameter, it won't change the block size of files already in HDFS. It'll use the new block size only for new files that are stored in HDFS.
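If a particular file needs a different block size, you can override the default at write time. Here's a sketch that writes a file with a 256MB block size; the file and path are made up for illustration:

$ hdfs dfs -D dfs.blocksize=268435456 -put sales.csv /data/sales/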
Applications can also specify the replication factor on a per-file basis, and you can change the replication factor of any existing file with the hdfs dfs -setrep option. You can change the replication factor for a single file with this command:
$ hdfs dfs -setrep -w 2 /data/test/test.txt
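An application typically sets the per-file replication factor when it creates the file. You can do the same thing from the command line by overriding dfs.replication for a single write; the file and directory names here are just for illustration:

$ hdfs dfs -D dfs.replication=2 -put test.txt /data/test/

The file is written with two replicas from the start, regardless of the cluster-wide default.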
You can change the replication factor for all files under a directory by specifying the directory path instead of a file name. (The -R option shown here is accepted for backward compatibility; in current Hadoop releases, -setrep recurses through a directory automatically.)
$ hdfs dfs -setrep -R -w 2 /data/test
You can reduce the amount of HDFS space occupied by a file by simply reducing the file's replication factor. When you reduce the replication factor using the hdfs dfs -setrep option, the NameNode sends the information about the excess replicas to the DataNodes, which will remove the corresponding blocks from HDFS.
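Because the DataNodes delete the excess replicas asynchronously, the freed space may not show up immediately. If you want to confirm the change at the block level, you can run fsck against the file, as in this sketch (using the file from the earlier command):

$ hdfs fsck /data/test/test.txt -files -blocks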
Here’s an example showing how to reduce the replication factor from the default level of 3 to 2:
Issue the following command to check the current replication factor for the file.
$ hdfs dfs -ls /user/hive/warehouse/customer/year=2016/month=01/day=31
-rw-r--r--   3 alapati analysts   60226324 2016-02-01 01:07 /user/hive/warehouse/customer/year=2016/month=01/day=31/CustRecord-20160131_040_28049_20160131235718-000001-0.avro
The number 3 next to the file permission list indicates the replication factor for this file.
Change the replication factor from 3 to 2 with the following command:
$ hdfs dfs -setrep -R 2 /user/hive/warehouse/customer/year=2016/month=01
You can check to make sure that the replication factor has been changed from 3 to 2:
$ hdfs dfs -ls /user/hive/warehouse/customer/year=2016/month=01/day=31
-rw-r--r--   2 alapati analysts   60226324 2016-02-01 01:07 /user/hive/warehouse/customer/year=2016/month=01/day=31/CustRecord-20160131_040_28049_20160131235718-000001-0.avro
The 2 next to the file permission list shows that the replication factor has changed from 3 to 2. Optionally, you can also add the -w flag to the setrep command to make it wait for the replication changes to complete, but this can take a long time for some files.
In the example here, I specified a directory, so the setrep command recursively changed the replication factor for all the files under that directory. You can also specify a single file to change just that file's replication factor.
Although I discussed reducing the replication factor as a way to conserve storage, you can also increase the replication factor for important data, as well as for data that's in demand (hot data).
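For example, to raise the replication factor of a heavily accessed dataset to 5 (the path here is hypothetical):

$ hdfs dfs -setrep -w 5 /data/hot/lookup-tables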